Test Report: Docker_macOS 14848

b63acb7dafa1eea311309da4a351492ab3bac7a2:2022-09-06:25602

Tests failed (24/287)

TestDownloadOnly/v1.16.0/preload-exists (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.10s)
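
For context, the failing assertion at aaa_download_only_test.go:107 simply stats the expected preload tarball and fails when the file is missing. A minimal Go sketch of that style of check, under the assumption that the path is derived from the minikube home directory (the `preloadExists` helper and the path construction here are illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// preloadExists mirrors the style of check in the failure above: stat the
// cached preload tarball and report an error if it does not exist.
func preloadExists(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("failed to verify preloaded tarball file exists: %w", err)
	}
	return nil
}

func main() {
	// Illustrative path; the real test derives it from MINIKUBE_HOME.
	p := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4")
	if err := preloadExists(p); err != nil {
		fmt.Println(err) // e.g. "... no such file or directory", as in the failure above
	}
}
```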

TestIngressAddonLegacy/StartLegacyK8sCluster (254.26s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220906145358-22187 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0906 14:54:08.991903   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:55:30.915018   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:57:41.251034   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:41.257516   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:41.267903   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:41.289764   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:41.331930   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:41.414176   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:41.576379   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:41.898554   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:42.539480   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:43.819684   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:46.379943   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:57:47.063658   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:57:51.500271   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:58:01.742621   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220906145358-22187 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.230003423s)
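
The harness drives the built minikube binary as a subprocess and treats a non-zero exit as a failure. A minimal sketch of that pattern in Go, reusing the binary path and flags from the command above (this helper code is illustrative, not the actual test harness):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the failing test; flags copied from the log above.
	cmd := exec.Command("out/minikube-darwin-amd64",
		"start", "-p", "ingress-addon-legacy-20220906145358-22187",
		"--kubernetes-version=v1.18.20", "--memory=4096",
		"--wait=true", "--alsologtostderr", "-v=5", "--driver=docker")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// This run ended with exit status 109 after 4m14s.
		fmt.Println("non-zero exit:", ee.ExitCode())
	}
}
```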

-- stdout --
	* [ingress-addon-legacy-20220906145358-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-20220906145358-22187 in cluster ingress-addon-legacy-20220906145358-22187
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0906 14:53:58.130769   25210 out.go:296] Setting OutFile to fd 1 ...
	I0906 14:53:58.130922   25210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:53:58.130927   25210 out.go:309] Setting ErrFile to fd 2...
	I0906 14:53:58.130931   25210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:53:58.131034   25210 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 14:53:58.131628   25210 out.go:303] Setting JSON to false
	I0906 14:53:58.146662   25210 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6809,"bootTime":1662494429,"procs":332,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 14:53:58.146747   25210 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 14:53:58.168694   25210 out.go:177] * [ingress-addon-legacy-20220906145358-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 14:53:58.189997   25210 notify.go:193] Checking for updates...
	I0906 14:53:58.211614   25210 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 14:53:58.232851   25210 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 14:53:58.253992   25210 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 14:53:58.275918   25210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 14:53:58.298139   25210 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 14:53:58.320253   25210 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 14:53:58.387632   25210 docker.go:137] docker version: linux-20.10.17
	I0906 14:53:58.387767   25210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 14:53:58.518526   25210 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 21:53:58.458990028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 14:53:58.561373   25210 out.go:177] * Using the docker driver based on user configuration
	I0906 14:53:58.583383   25210 start.go:284] selected driver: docker
	I0906 14:53:58.583417   25210 start.go:808] validating driver "docker" against <nil>
	I0906 14:53:58.583451   25210 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 14:53:58.586891   25210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 14:53:58.714590   25210 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 21:53:58.657938614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 14:53:58.714736   25210 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0906 14:53:58.714862   25210 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 14:53:58.736747   25210 out.go:177] * Using Docker Desktop driver with root privileges
	I0906 14:53:58.758509   25210 cni.go:95] Creating CNI manager for ""
	I0906 14:53:58.758565   25210 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 14:53:58.758593   25210 start_flags.go:310] config:
	{Name:ingress-addon-legacy-20220906145358-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220906145358-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 14:53:58.780413   25210 out.go:177] * Starting control plane node ingress-addon-legacy-20220906145358-22187 in cluster ingress-addon-legacy-20220906145358-22187
	I0906 14:53:58.802374   25210 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 14:53:58.824500   25210 out.go:177] * Pulling base image ...
	I0906 14:53:58.867426   25210 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 14:53:58.867412   25210 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 14:53:58.930411   25210 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 14:53:58.930435   25210 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 14:53:58.938046   25210 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0906 14:53:58.938067   25210 cache.go:57] Caching tarball of preloaded images
	I0906 14:53:58.938402   25210 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 14:53:58.982239   25210 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0906 14:53:59.003951   25210 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0906 14:53:59.113510   25210 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0906 14:54:03.945046   25210 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0906 14:54:03.945189   25210 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0906 14:54:04.575856   25210 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
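
The download/verify sequence above fetches the preload tarball from a URL carrying "?checksum=md5:<hex>" and then hashes the saved file against that digest. A minimal Go sketch of that kind of post-download MD5 verification (verifyMD5 is an illustrative helper, not minikube's actual function; the digest is taken from the download URL in the log):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected hex digest.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	err := verifyMD5("preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4",
		"ff35f06d4f6c0bac9297b8f85d8ebf70")
	fmt.Println(err)
}
```
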
	I0906 14:54:04.576116   25210 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/config.json ...
	I0906 14:54:04.576145   25210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/config.json: {Name:mk87c606faaa4a24573166da5ad878d0cbd7d7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 14:54:04.617886   25210 cache.go:208] Successfully downloaded all kic artifacts
	I0906 14:54:04.617949   25210 start.go:364] acquiring machines lock for ingress-addon-legacy-20220906145358-22187: {Name:mk8b4f4909c50158c1a2748b4ab31dd487493cec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 14:54:04.618107   25210 start.go:368] acquired machines lock for "ingress-addon-legacy-20220906145358-22187" in 144.351µs
	I0906 14:54:04.618140   25210 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220906145358-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220906145358-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 14:54:04.618207   25210 start.go:125] createHost starting for "" (driver="docker")
	I0906 14:54:04.665482   25210 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0906 14:54:04.665798   25210 start.go:159] libmachine.API.Create for "ingress-addon-legacy-20220906145358-22187" (driver="docker")
	I0906 14:54:04.665844   25210 client.go:168] LocalClient.Create starting
	I0906 14:54:04.666010   25210 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem
	I0906 14:54:04.666078   25210 main.go:134] libmachine: Decoding PEM data...
	I0906 14:54:04.666103   25210 main.go:134] libmachine: Parsing certificate...
	I0906 14:54:04.666190   25210 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem
	I0906 14:54:04.666242   25210 main.go:134] libmachine: Decoding PEM data...
	I0906 14:54:04.666262   25210 main.go:134] libmachine: Parsing certificate...
	I0906 14:54:04.667098   25210 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220906145358-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 14:54:04.729388   25210 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220906145358-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 14:54:04.729496   25210 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220906145358-22187] to gather additional debugging logs...
	I0906 14:54:04.729514   25210 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220906145358-22187
	W0906 14:54:04.790719   25210 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220906145358-22187 returned with exit code 1
	I0906 14:54:04.790747   25210 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220906145358-22187]: docker network inspect ingress-addon-legacy-20220906145358-22187: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220906145358-22187
	I0906 14:54:04.790792   25210 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220906145358-22187]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220906145358-22187
	
	** /stderr **
	I0906 14:54:04.790911   25210 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 14:54:04.852218   25210 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c04060] misses:0}
	I0906 14:54:04.852257   25210 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 14:54:04.852271   25210 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220906145358-22187 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0906 14:54:04.852366   25210 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220906145358-22187 ingress-addon-legacy-20220906145358-22187
	I0906 14:54:04.951337   25210 network_create.go:99] docker network ingress-addon-legacy-20220906145358-22187 192.168.49.0/24 created
	I0906 14:54:04.951378   25210 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220906145358-22187" container
	I0906 14:54:04.951497   25210 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 14:54:05.012931   25210 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220906145358-22187 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220906145358-22187 --label created_by.minikube.sigs.k8s.io=true
	I0906 14:54:05.076253   25210 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220906145358-22187
	I0906 14:54:05.076402   25210 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220906145358-22187-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220906145358-22187 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220906145358-22187:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -d /var/lib
	I0906 14:54:05.498941   25210 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220906145358-22187
	I0906 14:54:05.498987   25210 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 14:54:05.499000   25210 kic.go:179] Starting extracting preloaded images to volume ...
	I0906 14:54:05.499149   25210 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220906145358-22187:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -I lz4 -xf /preloaded.tar -C /extractDir
	I0906 14:54:09.740337   25210 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220906145358-22187:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -I lz4 -xf /preloaded.tar -C /extractDir: (4.24106286s)
	I0906 14:54:09.740369   25210 kic.go:188] duration metric: took 4.241326 seconds to extract preloaded images to volume
	I0906 14:54:09.740474   25210 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 14:54:09.869309   25210 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220906145358-22187 --name ingress-addon-legacy-20220906145358-22187 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220906145358-22187 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220906145358-22187 --network ingress-addon-legacy-20220906145358-22187 --ip 192.168.49.2 --volume ingress-addon-legacy-20220906145358-22187:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d
	I0906 14:54:10.225670   25210 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220906145358-22187 --format={{.State.Running}}
	I0906 14:54:10.291045   25210 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220906145358-22187 --format={{.State.Status}}
	I0906 14:54:10.357248   25210 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220906145358-22187 stat /var/lib/dpkg/alternatives/iptables
	I0906 14:54:10.463979   25210 oci.go:144] the created container "ingress-addon-legacy-20220906145358-22187" has a running status.
	I0906 14:54:10.464003   25210 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa...
	I0906 14:54:10.725219   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0906 14:54:10.725277   25210 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 14:54:10.834311   25210 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220906145358-22187 --format={{.State.Status}}
	I0906 14:54:10.896733   25210 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 14:54:10.896754   25210 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220906145358-22187 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 14:54:11.000008   25210 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220906145358-22187 --format={{.State.Status}}
	I0906 14:54:11.062518   25210 machine.go:88] provisioning docker machine ...
	I0906 14:54:11.063163   25210 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220906145358-22187"
	I0906 14:54:11.063260   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:11.125104   25210 main.go:134] libmachine: Using SSH client type: native
	I0906 14:54:11.126081   25210 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 56403 <nil> <nil>}
	I0906 14:54:11.126094   25210 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220906145358-22187 && echo "ingress-addon-legacy-20220906145358-22187" | sudo tee /etc/hostname
	I0906 14:54:11.249089   25210 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220906145358-22187
	
	I0906 14:54:11.249164   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:11.314504   25210 main.go:134] libmachine: Using SSH client type: native
	I0906 14:54:11.315211   25210 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 56403 <nil> <nil>}
	I0906 14:54:11.315228   25210 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220906145358-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220906145358-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220906145358-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 14:54:11.429909   25210 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 14:54:11.429932   25210 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 14:54:11.429970   25210 ubuntu.go:177] setting up certificates
	I0906 14:54:11.429981   25210 provision.go:83] configureAuth start
	I0906 14:54:11.430041   25210 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:11.493889   25210 provision.go:138] copyHostCerts
	I0906 14:54:11.493923   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 14:54:11.493995   25210 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 14:54:11.494006   25210 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 14:54:11.494101   25210 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 14:54:11.494253   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 14:54:11.494281   25210 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 14:54:11.494286   25210 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 14:54:11.494345   25210 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 14:54:11.494449   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 14:54:11.494474   25210 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 14:54:11.494479   25210 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 14:54:11.494559   25210 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 14:54:11.494737   25210 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220906145358-22187 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220906145358-22187]
	I0906 14:54:11.644049   25210 provision.go:172] copyRemoteCerts
	I0906 14:54:11.644101   25210 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 14:54:11.644149   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:11.707624   25210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56403 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa Username:docker}
	I0906 14:54:11.791357   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 14:54:11.791451   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 14:54:11.807937   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 14:54:11.808015   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1294 bytes)
	I0906 14:54:11.824580   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 14:54:11.824668   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 14:54:11.841517   25210 provision.go:86] duration metric: configureAuth took 411.51902ms
	I0906 14:54:11.841530   25210 ubuntu.go:193] setting minikube options for container-runtime
	I0906 14:54:11.841666   25210 config.go:180] Loaded profile config "ingress-addon-legacy-20220906145358-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0906 14:54:11.841734   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:11.904812   25210 main.go:134] libmachine: Using SSH client type: native
	I0906 14:54:11.904985   25210 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 56403 <nil> <nil>}
	I0906 14:54:11.904998   25210 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 14:54:12.019323   25210 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 14:54:12.019335   25210 ubuntu.go:71] root file system type: overlay
	I0906 14:54:12.019540   25210 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 14:54:12.019615   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:12.083565   25210 main.go:134] libmachine: Using SSH client type: native
	I0906 14:54:12.083734   25210 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 56403 <nil> <nil>}
	I0906 14:54:12.083784   25210 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 14:54:12.204034   25210 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 14:54:12.204105   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:12.266861   25210 main.go:134] libmachine: Using SSH client type: native
	I0906 14:54:12.267037   25210 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 56403 <nil> <nil>}
	I0906 14:54:12.267051   25210 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 14:54:12.830678   25210 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 21:54:12.205196616 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0906 14:54:12.830720   25210 machine.go:91] provisioned docker machine in 1.767565459s
	I0906 14:54:12.830730   25210 client.go:171] LocalClient.Create took 8.164790257s
	I0906 14:54:12.830746   25210 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220906145358-22187" took 8.16487086s
	I0906 14:54:12.830757   25210 start.go:300] post-start starting for "ingress-addon-legacy-20220906145358-22187" (driver="docker")
	I0906 14:54:12.830762   25210 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 14:54:12.830821   25210 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 14:54:12.830871   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:12.898927   25210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56403 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa Username:docker}
	I0906 14:54:12.983531   25210 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 14:54:12.986788   25210 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 14:54:12.986804   25210 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 14:54:12.986812   25210 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 14:54:12.986816   25210 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 14:54:12.986825   25210 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 14:54:12.986938   25210 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 14:54:12.987076   25210 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 14:54:12.987082   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /etc/ssl/certs/221872.pem
	I0906 14:54:12.987279   25210 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 14:54:12.993883   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 14:54:13.010740   25210 start.go:303] post-start completed in 179.972875ms
	I0906 14:54:13.011242   25210 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:13.074503   25210 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/config.json ...
	I0906 14:54:13.074900   25210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 14:54:13.074955   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:13.172261   25210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56403 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa Username:docker}
	I0906 14:54:13.257266   25210 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 14:54:13.261404   25210 start.go:128] duration metric: createHost completed in 8.643104727s
	I0906 14:54:13.261422   25210 start.go:83] releasing machines lock for "ingress-addon-legacy-20220906145358-22187", held for 8.64321824s
	I0906 14:54:13.261488   25210 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:13.324022   25210 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0906 14:54:13.324029   25210 ssh_runner.go:195] Run: systemctl --version
	I0906 14:54:13.324086   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:13.324131   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:13.389655   25210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56403 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa Username:docker}
	I0906 14:54:13.389760   25210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56403 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa Username:docker}
	I0906 14:54:13.469739   25210 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 14:54:13.622168   25210 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 14:54:13.622268   25210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 14:54:13.632009   25210 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 14:54:13.644316   25210 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
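With both endpoints in the /etc/crictl.yaml written above pointing at dockershim, crictl can be used on the node without per-invocation flags; a sketch, assuming crictl is installed:

    sudo crictl ps    # talks to unix:///var/run/dockershim.sock per /etc/crictl.yaml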
	I0906 14:54:13.714375   25210 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 14:54:13.779205   25210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 14:54:13.844370   25210 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 14:54:14.031273   25210 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 14:54:14.066075   25210 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 14:54:14.143049   25210 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	I0906 14:54:14.143234   25210 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220906145358-22187 dig +short host.docker.internal
	I0906 14:54:14.258857   25210 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 14:54:14.258972   25210 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 14:54:14.263087   25210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
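The copy-back with 'sudo cp' rather than 'mv' in the command above is deliberate: inside a Docker container /etc/hosts is bind-mounted, so it can be rewritten in place but not replaced by a rename. A sketch of the same pattern, reusing the entry from that command:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts    # cp keeps the bind-mounted inode; mv would fail with EBUSY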
	I0906 14:54:14.272843   25210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:54:14.335482   25210 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 14:54:14.336099   25210 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 14:54:14.364525   25210 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0906 14:54:14.364543   25210 docker.go:542] Images already preloaded, skipping extraction
	I0906 14:54:14.364605   25210 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 14:54:14.393219   25210 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0906 14:54:14.393238   25210 cache_images.go:84] Images are preloaded, skipping loading
	I0906 14:54:14.393318   25210 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 14:54:14.465357   25210 cni.go:95] Creating CNI manager for ""
	I0906 14:54:14.465374   25210 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 14:54:14.465386   25210 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 14:54:14.465398   25210 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220906145358-22187 NodeName:ingress-addon-legacy-20220906145358-22187 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 14:54:14.465519   25210 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220906145358-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
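A generated config like this can be sanity-checked before the real run; a sketch, assuming the file has been copied to the node as /var/tmp/minikube/kubeadm.yaml:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run

Note that the evictionHard thresholds at 0% plus imageGCHighThresholdPercent: 100 intentionally turn off kubelet disk eviction, matching the "disable disk resource management by default" comment.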
	
	I0906 14:54:14.465610   25210 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220906145358-22187 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220906145358-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 14:54:14.465667   25210 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0906 14:54:14.473162   25210 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 14:54:14.473213   25210 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 14:54:14.480125   25210 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0906 14:54:14.492312   25210 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0906 14:54:14.505268   25210 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
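Once the unit and drop-in are written, the effective kubelet service can be inspected the same way the provisioner inspects docker above; a sketch:

    sudo systemctl cat kubelet    # shows kubelet.service plus 10-kubeadm.conf merged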
	I0906 14:54:14.517711   25210 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0906 14:54:14.521414   25210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 14:54:14.530945   25210 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187 for IP: 192.168.49.2
	I0906 14:54:14.531056   25210 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 14:54:14.531107   25210 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 14:54:14.531147   25210 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/client.key
	I0906 14:54:14.531159   25210 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/client.crt with IP's: []
	I0906 14:54:14.655020   25210 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/client.crt ...
	I0906 14:54:14.655031   25210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/client.crt: {Name:mkbf7e553143165332f483aad90c012dcd831be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 14:54:14.655325   25210 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/client.key ...
	I0906 14:54:14.655339   25210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/client.key: {Name:mk5aded8d8d813e4733d64b6cbb715b17b6c56a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 14:54:14.655522   25210 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.key.dd3b5fb2
	I0906 14:54:14.655537   25210 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 14:54:14.765489   25210 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.crt.dd3b5fb2 ...
	I0906 14:54:14.765500   25210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.crt.dd3b5fb2: {Name:mk3c48080005ffdca709a47c25ba9d1e092fe4eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 14:54:14.765743   25210 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.key.dd3b5fb2 ...
	I0906 14:54:14.765751   25210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.key.dd3b5fb2: {Name:mk3a550a151190211cd4dffc133a87ebd78c5845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 14:54:14.765938   25210 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.crt
	I0906 14:54:14.766110   25210 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.key
	I0906 14:54:14.766252   25210 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.key
	I0906 14:54:14.766285   25210 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.crt with IP's: []
	I0906 14:54:14.818896   25210 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.crt ...
	I0906 14:54:14.818907   25210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.crt: {Name:mk83fa26e521ad9fe2efaf0a9015c79527900dc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 14:54:14.819119   25210 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.key ...
	I0906 14:54:14.819127   25210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.key: {Name:mkc08826491f9ccc10b87281134a6d539a6100d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 14:54:14.819303   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 14:54:14.819332   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 14:54:14.819349   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 14:54:14.819365   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 14:54:14.819382   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 14:54:14.819398   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 14:54:14.819424   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 14:54:14.819441   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 14:54:14.819541   25210 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 14:54:14.819582   25210 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 14:54:14.819591   25210 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 14:54:14.819625   25210 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 14:54:14.819653   25210 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 14:54:14.819688   25210 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 14:54:14.819754   25210 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 14:54:14.819783   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem -> /usr/share/ca-certificates/22187.pem
	I0906 14:54:14.819802   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /usr/share/ca-certificates/221872.pem
	I0906 14:54:14.819823   25210 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 14:54:14.820273   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 14:54:14.837482   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 14:54:14.854013   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 14:54:14.870180   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/ingress-addon-legacy-20220906145358-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 14:54:14.887013   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 14:54:14.903183   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 14:54:14.919509   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 14:54:14.936254   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 14:54:14.952550   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 14:54:14.968804   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 14:54:14.984888   25210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 14:54:15.001060   25210 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 14:54:15.013595   25210 ssh_runner.go:195] Run: openssl version
	I0906 14:54:15.019143   25210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 14:54:15.027530   25210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 14:54:15.031357   25210 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 14:54:15.031395   25210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 14:54:15.036481   25210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 14:54:15.043860   25210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 14:54:15.051531   25210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 14:54:15.055172   25210 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 14:54:15.055218   25210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 14:54:15.060634   25210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 14:54:15.068050   25210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 14:54:15.075969   25210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 14:54:15.079777   25210 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 14:54:15.079823   25210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 14:54:15.085097   25210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
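The /etc/ssl/certs symlinks created above follow OpenSSL's hashed-directory convention: a CA is looked up by the hash of its subject name, suffixed with .0. A sketch of creating one by hand, assuming a hypothetical ca.pem:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/ca.pem)
    sudo ln -fs /usr/share/ca-certificates/ca.pem /etc/ssl/certs/${HASH}.0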
	I0906 14:54:15.092608   25210 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-20220906145358-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220906145358-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 14:54:15.092701   25210 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 14:54:15.120656   25210 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 14:54:15.127894   25210 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 14:54:15.134910   25210 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 14:54:15.134958   25210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 14:54:15.141796   25210 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 14:54:15.141823   25210 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 14:54:15.187356   25210 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I0906 14:54:15.187405   25210 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 14:54:15.486829   25210 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 14:54:15.486908   25210 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 14:54:15.486989   25210 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 14:54:15.744944   25210 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 14:54:15.745253   25210 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 14:54:15.745300   25210 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 14:54:15.815528   25210 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 14:54:15.858865   25210 out.go:204]   - Generating certificates and keys ...
	I0906 14:54:15.858942   25210 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 14:54:15.859044   25210 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 14:54:16.124804   25210 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 14:54:16.216898   25210 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0906 14:54:16.333662   25210 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0906 14:54:16.478870   25210 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0906 14:54:16.612344   25210 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0906 14:54:16.612481   25210 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220906145358-22187 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 14:54:16.710357   25210 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0906 14:54:16.710479   25210 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220906145358-22187 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 14:54:16.844417   25210 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 14:54:17.201958   25210 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 14:54:17.279853   25210 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0906 14:54:17.279901   25210 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 14:54:17.417411   25210 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 14:54:17.584343   25210 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 14:54:17.791159   25210 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 14:54:17.938149   25210 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 14:54:17.938629   25210 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 14:54:17.960309   25210 out.go:204]   - Booting up control plane ...
	I0906 14:54:17.960523   25210 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 14:54:17.960639   25210 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 14:54:17.960773   25210 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 14:54:17.960919   25210 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 14:54:17.961153   25210 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 14:54:57.921786   25210 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 14:54:57.922705   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:54:57.922850   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:55:02.920999   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:55:02.921204   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:55:12.915736   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:55:12.915961   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:55:32.902887   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:55:32.903065   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:56:12.875880   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:56:12.876064   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:56:12.876087   25210 kubeadm.go:317] 
	I0906 14:56:12.876140   25210 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I0906 14:56:12.876176   25210 kubeadm.go:317] 		timed out waiting for the condition
	I0906 14:56:12.876179   25210 kubeadm.go:317] 
	I0906 14:56:12.876210   25210 kubeadm.go:317] 	This error is likely caused by:
	I0906 14:56:12.876242   25210 kubeadm.go:317] 		- The kubelet is not running
	I0906 14:56:12.876323   25210 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 14:56:12.876333   25210 kubeadm.go:317] 
	I0906 14:56:12.876402   25210 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 14:56:12.876431   25210 kubeadm.go:317] 		- 'systemctl status kubelet'
	I0906 14:56:12.876467   25210 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I0906 14:56:12.876477   25210 kubeadm.go:317] 
	I0906 14:56:12.876558   25210 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 14:56:12.876612   25210 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 14:56:12.876617   25210 kubeadm.go:317] 
	I0906 14:56:12.876684   25210 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0906 14:56:12.876727   25210 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I0906 14:56:12.876793   25210 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I0906 14:56:12.876830   25210 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I0906 14:56:12.876849   25210 kubeadm.go:317] 
	I0906 14:56:12.878773   25210 kubeadm.go:317] W0906 21:54:15.192231     951 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0906 14:56:12.878860   25210 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 14:56:12.878956   25210 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
	I0906 14:56:12.879035   25210 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 14:56:12.879144   25210 kubeadm.go:317] W0906 21:54:17.949834     951 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 14:56:12.879237   25210 kubeadm.go:317] W0906 21:54:17.950676     951 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 14:56:12.879308   25210 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 14:56:12.879366   25210 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0906 14:56:12.879547   25210 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220906145358-22187 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220906145358-22187 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0906 21:54:15.192231     951 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0906 21:54:17.949834     951 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0906 21:54:17.950676     951 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
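A kubelet that never answers on localhost:10248 in this configuration is commonly a cgroup-driver mismatch: the KubeletConfiguration above pins cgroupDriver: systemd, so Docker must report the same driver. A quick check, using the same commands that appear elsewhere in this log:

    docker info --format '{{.CgroupDriver}}'    # expected to print systemd here
    sudo journalctl -u kubelet --no-pager | tail -n 20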
	
	I0906 14:56:12.879576   25210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 14:56:13.297897   25210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 14:56:13.306919   25210 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 14:56:13.306980   25210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 14:56:13.314146   25210 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 14:56:13.314168   25210 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 14:56:13.358328   25210 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I0906 14:56:13.358382   25210 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 14:56:13.661275   25210 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 14:56:13.661386   25210 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 14:56:13.661473   25210 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 14:56:13.923026   25210 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 14:56:13.923597   25210 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 14:56:13.923656   25210 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 14:56:13.994839   25210 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 14:56:14.016427   25210 out.go:204]   - Generating certificates and keys ...
	I0906 14:56:14.016496   25210 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 14:56:14.016565   25210 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 14:56:14.016628   25210 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 14:56:14.016712   25210 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 14:56:14.016779   25210 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 14:56:14.016827   25210 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 14:56:14.016888   25210 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 14:56:14.016951   25210 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 14:56:14.017016   25210 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 14:56:14.017081   25210 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 14:56:14.017123   25210 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 14:56:14.017164   25210 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 14:56:14.062744   25210 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 14:56:14.317126   25210 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 14:56:14.554676   25210 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 14:56:14.757864   25210 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 14:56:14.758650   25210 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 14:56:14.780048   25210 out.go:204]   - Booting up control plane ...
	I0906 14:56:14.780125   25210 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 14:56:14.780195   25210 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 14:56:14.780258   25210 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 14:56:14.780336   25210 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 14:56:14.780521   25210 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 14:56:54.740748   25210 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 14:56:54.741730   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:56:54.741928   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:56:59.738344   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:56:59.738490   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:57:09.733723   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:57:09.733922   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:57:29.721042   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:57:29.721277   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:58:09.694509   25210 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 14:58:09.694733   25210 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 14:58:09.694750   25210 kubeadm.go:317] 
	I0906 14:58:09.694797   25210 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I0906 14:58:09.694846   25210 kubeadm.go:317] 		timed out waiting for the condition
	I0906 14:58:09.694855   25210 kubeadm.go:317] 
	I0906 14:58:09.694903   25210 kubeadm.go:317] 	This error is likely caused by:
	I0906 14:58:09.694942   25210 kubeadm.go:317] 		- The kubelet is not running
	I0906 14:58:09.695069   25210 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 14:58:09.695084   25210 kubeadm.go:317] 
	I0906 14:58:09.695198   25210 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 14:58:09.695236   25210 kubeadm.go:317] 		- 'systemctl status kubelet'
	I0906 14:58:09.695271   25210 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I0906 14:58:09.695281   25210 kubeadm.go:317] 
	I0906 14:58:09.695395   25210 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 14:58:09.695496   25210 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 14:58:09.695509   25210 kubeadm.go:317] 
	I0906 14:58:09.695601   25210 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0906 14:58:09.695661   25210 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I0906 14:58:09.695735   25210 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I0906 14:58:09.695772   25210 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I0906 14:58:09.695782   25210 kubeadm.go:317] 
	I0906 14:58:09.698004   25210 kubeadm.go:317] W0906 21:56:13.361700    3414 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0906 14:58:09.698093   25210 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 14:58:09.698208   25210 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
	I0906 14:58:09.698296   25210 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 14:58:09.698397   25210 kubeadm.go:317] W0906 21:56:14.767681    3414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 14:58:09.698485   25210 kubeadm.go:317] W0906 21:56:14.768746    3414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 14:58:09.698550   25210 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 14:58:09.698607   25210 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0906 14:58:09.698638   25210 kubeadm.go:398] StartCluster complete in 3m54.603702191s
	I0906 14:58:09.698705   25210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 14:58:09.726626   25210 logs.go:274] 0 containers: []
	W0906 14:58:09.726639   25210 logs.go:276] No container was found matching "kube-apiserver"
	I0906 14:58:09.726698   25210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 14:58:09.755641   25210 logs.go:274] 0 containers: []
	W0906 14:58:09.755653   25210 logs.go:276] No container was found matching "etcd"
	I0906 14:58:09.755708   25210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 14:58:09.783922   25210 logs.go:274] 0 containers: []
	W0906 14:58:09.783935   25210 logs.go:276] No container was found matching "coredns"
	I0906 14:58:09.783991   25210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 14:58:09.811850   25210 logs.go:274] 0 containers: []
	W0906 14:58:09.811863   25210 logs.go:276] No container was found matching "kube-scheduler"
	I0906 14:58:09.811926   25210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 14:58:09.840669   25210 logs.go:274] 0 containers: []
	W0906 14:58:09.840681   25210 logs.go:276] No container was found matching "kube-proxy"
	I0906 14:58:09.840734   25210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 14:58:09.869312   25210 logs.go:274] 0 containers: []
	W0906 14:58:09.869323   25210 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 14:58:09.869378   25210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 14:58:09.896802   25210 logs.go:274] 0 containers: []
	W0906 14:58:09.896815   25210 logs.go:276] No container was found matching "storage-provisioner"
	I0906 14:58:09.896872   25210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 14:58:09.925001   25210 logs.go:274] 0 containers: []
	W0906 14:58:09.925013   25210 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 14:58:09.925020   25210 logs.go:123] Gathering logs for dmesg ...
	I0906 14:58:09.925027   25210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 14:58:09.936476   25210 logs.go:123] Gathering logs for describe nodes ...
	I0906 14:58:09.936488   25210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 14:58:09.986705   25210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 14:58:09.986716   25210 logs.go:123] Gathering logs for Docker ...
	I0906 14:58:09.986723   25210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 14:58:10.001720   25210 logs.go:123] Gathering logs for container status ...
	I0906 14:58:10.001734   25210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 14:58:12.056652   25210 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054884973s)
	I0906 14:58:12.056768   25210 logs.go:123] Gathering logs for kubelet ...
	I0906 14:58:12.056775   25210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0906 14:58:12.098520   25210 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0906 21:56:13.361700    3414 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0906 21:56:14.767681    3414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0906 21:56:14.768746    3414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 14:58:12.098542   25210 out.go:239] * 
	W0906 14:58:12.098677   25210 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0906 21:56:13.361700    3414 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0906 21:56:14.767681    3414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0906 21:56:14.768746    3414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 14:58:12.098696   25210 out.go:239] * 
	W0906 14:58:12.099329   25210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 14:58:12.164071   25210 out.go:177] 
	W0906 14:58:12.206059   25210 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0906 21:56:13.361700    3414 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0906 21:56:14.767681    3414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0906 21:56:14.768746    3414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 14:58:12.206195   25210 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 14:58:12.206268   25210 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 14:58:12.228154   25210 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220906145358-22187 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.26s)
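
The failure above is the one kubeadm's embedded advice and minikube's K8S_KUBELET_NOT_RUNNING exit code both point at: the kubelet never became healthy, its healthz endpoint on 127.0.0.1:10248 refused every connection for the full 4m0s wait, and no control-plane containers were ever created. A minimal sketch of how the log's own troubleshooting advice could be followed by hand against this run (the profile name is taken from the test; every command is drawn from, or implied by, the log itself):

	# open a shell in the failing node (docker driver)
	minikube -p ingress-addon-legacy-20220906145358-22187 ssh

	# from kubeadm's advice: is the kubelet running, and why not?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50

	# the endpoint kubeadm was polling; "connection refused" means no kubelet
	curl -sSL http://localhost:10248/healthz

	# did the container runtime ever start any control-plane containers?
	docker ps -a | grep kube | grep -v pause

	# if journalctl shows a cgroup-driver mismatch, compare with Docker's driver
	docker info | grep -i cgroup

If the drivers disagree, the suggestion logged above, passing '--extra-config=kubelet.cgroup-driver=systemd' to 'minikube start' (see minikube issue #4172), is the remedy the log points to.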

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220906145358-22187 addons enable ingress --alsologtostderr -v=5
E0906 14:58:14.756815   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:58:22.223280   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 14:59:03.185999   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220906145358-22187 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.110930973s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 14:58:12.398610   25539 out.go:296] Setting OutFile to fd 1 ...
	I0906 14:58:12.398864   25539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:58:12.398869   25539 out.go:309] Setting ErrFile to fd 2...
	I0906 14:58:12.398872   25539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:58:12.398991   25539 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 14:58:12.420816   25539 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0906 14:58:12.442226   25539 config.go:180] Loaded profile config "ingress-addon-legacy-20220906145358-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0906 14:58:12.442258   25539 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220906145358-22187"
	I0906 14:58:12.442273   25539 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220906145358-22187"
	I0906 14:58:12.442784   25539 host.go:66] Checking if "ingress-addon-legacy-20220906145358-22187" exists ...
	I0906 14:58:12.443492   25539 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220906145358-22187 --format={{.State.Status}}
	I0906 14:58:12.529539   25539 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0906 14:58:12.550364   25539 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0906 14:58:12.571334   25539 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0906 14:58:12.592430   25539 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0906 14:58:12.613295   25539 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 14:58:12.613321   25539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0906 14:58:12.613422   25539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:58:12.676545   25539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56403 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa Username:docker}
	I0906 14:58:12.762529   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:12.822829   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:12.822856   25539 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:13.100389   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:13.151763   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:13.151784   25539 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:13.694129   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:13.747030   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:13.747050   25539 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:14.404124   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:14.454038   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:14.454059   25539 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:15.245866   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:15.296441   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:15.296458   25539 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:16.468331   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:16.520764   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:16.520779   25539 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:18.776181   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:18.827296   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:18.827310   25539 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:20.438236   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:20.489387   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:20.489408   25539 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:23.293884   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:23.344205   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:23.344219   25539 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:27.170805   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:27.222305   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:27.222318   25539 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:34.922130   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:34.975766   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:34.975780   25539 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:49.611705   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:58:49.662531   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:58:49.662545   25539 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:18.071558   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:59:18.124334   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:18.124348   25539 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:41.293058   25539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0906 14:59:41.342859   25539 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:41.342884   25539 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-20220906145358-22187"
	I0906 14:59:41.364693   25539 out.go:177] * Verifying ingress addon...
	I0906 14:59:41.387625   25539 out.go:177] 
	W0906 14:59:41.409822   25539 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220906145358-22187" does not exist: client config: context "ingress-addon-legacy-20220906145358-22187" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220906145358-22187" does not exist: client config: context "ingress-addon-legacy-20220906145358-22187" does not exist]
	W0906 14:59:41.409853   25539 out.go:239] * 
	* 
	W0906 14:59:41.414004   25539 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 14:59:41.435829   25539 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
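The pattern above is the addon applier giving the apiserver roughly ninety seconds to come up: every `kubectl apply` against localhost:8443 is refused, and retry.go sleeps for a growing, jittered interval between attempts (655ms, 791ms, 1.17s, ... 28.4s) before MK_ADDON_ENABLE aborts the enable. A minimal sketch of that retry-with-backoff shape in Go (a hypothetical helper for illustration, not minikube's actual retry.go):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs f up to maxAttempts times, sleeping between failures with
	// exponential backoff plus random jitter, roughly the shape of the
	// growing "will retry after ..." intervals logged by retry.go above.
	func retry(maxAttempts int, base time.Duration, f func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = f(); err == nil {
				return nil
			}
			sleep := base << uint(attempt)                    // exponential growth
			sleep += time.Duration(rand.Int63n(int64(sleep))) // jitter
			time.Sleep(sleep)
		}
		return fmt.Errorf("all %d attempts failed: %w", maxAttempts, err)
	}

	func main() {
		err := retry(5, 200*time.Millisecond, func() error {
			// Stand-in for the failing `kubectl apply` in the log above.
			return fmt.Errorf("connection to localhost:8443 refused")
		})
		fmt.Println(err)
	}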
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220906145358-22187
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220906145358-22187:

-- stdout --
	[
	    {
	        "Id": "3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e",
	        "Created": "2022-09-06T21:54:09.932666785Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37578,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T21:54:10.220315726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/hosts",
	        "LogPath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e-json.log",
	        "Name": "/ingress-addon-legacy-20220906145358-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220906145358-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220906145358-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220906145358-22187",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220906145358-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220906145358-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220906145358-22187",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220906145358-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "836d1168acb1dbdd7bb19f1e65ded0bb31e36fc341412ce094225d3ca7b665cb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56405"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56406"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/836d1168acb1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220906145358-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3a92507b2d6f",
	                        "ingress-addon-legacy-20220906145358-22187"
	                    ],
	                    "NetworkID": "9cfcddf22a693c1a7ac4fa3ac7421e644f62258483b1ab18e4d7b5855af41ebf",
	                    "EndpointID": "f2a0d25a714ca42d831675618877e59ed13c21fd2ea0d1318087fb006046dfb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
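The inspect output narrows the failure: the node container is still Running and 8443/tcp is published on 0.0.0.0:56407, so Docker networking is intact and the refused connections come from inside the container, where no apiserver is listening. A small sketch for pulling just those triage fields out of `docker inspect` with the Go standard library (the struct mirrors the JSON field names above; the container name is from this run):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// inspectInfo mirrors the few `docker inspect` fields that matter here:
	// container state and the host port 8443/tcp was published on.
	type inspectInfo struct {
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		name := "ingress-addon-legacy-20220906145358-22187"
		out, err := exec.Command("docker", "inspect", name).Output()
		if err != nil {
			log.Fatal(err)
		}
		var infos []inspectInfo
		if err := json.Unmarshal(out, &infos); err != nil {
			log.Fatal(err)
		}
		for _, c := range infos {
			fmt.Printf("state=%s running=%v\n", c.State.Status, c.State.Running)
			for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
				fmt.Printf("8443/tcp published on %s:%s\n", b.HostIp, b.HostPort)
			}
		}
	}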
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220906145358-22187 -n ingress-addon-legacy-20220906145358-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220906145358-22187 -n ingress-addon-legacy-20220906145358-22187: exit status 6 (409.748962ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0906 14:59:41.927130   25625 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220906145358-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220906145358-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)
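The status probe fails for a second reason visible in the stderr above: the kubeconfig at /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig has no context for the profile, which is also why the earlier addon enable reported `context ... does not exist`. Checking for a named context can be done with client-go's clientcmd (a sketch; the path and profile name are taken from this run):

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path and profile name from the failing run above.
		path := "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig"
		name := "ingress-addon-legacy-20220906145358-22187"

		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			log.Fatal(err)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q does not appear in %s\n", name, path)
			return
		}
		fmt.Printf("context %q present; current-context is %q\n", name, cfg.CurrentContext)
	}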

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220906145358-22187 addons enable ingress-dns --alsologtostderr -v=5
E0906 15:00:25.107222   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220906145358-22187 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.030929907s)

-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0906 14:59:41.986471   25635 out.go:296] Setting OutFile to fd 1 ...
	I0906 14:59:41.986660   25635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:59:41.986665   25635 out.go:309] Setting ErrFile to fd 2...
	I0906 14:59:41.986669   25635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:59:41.986761   25635 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 14:59:42.009622   25635 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0906 14:59:42.038128   25635 config.go:180] Loaded profile config "ingress-addon-legacy-20220906145358-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0906 14:59:42.038161   25635 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220906145358-22187"
	I0906 14:59:42.038174   25635 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220906145358-22187"
	I0906 14:59:42.038696   25635 host.go:66] Checking if "ingress-addon-legacy-20220906145358-22187" exists ...
	I0906 14:59:42.039621   25635 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220906145358-22187 --format={{.State.Status}}
	I0906 14:59:42.123565   25635 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0906 14:59:42.145741   25635 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0906 14:59:42.167355   25635 addons.go:345] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 14:59:42.167379   25635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0906 14:59:42.167457   25635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220906145358-22187
	I0906 14:59:42.231374   25635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56403 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/ingress-addon-legacy-20220906145358-22187/id_rsa Username:docker}
	I0906 14:59:42.321704   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:42.370908   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:42.370928   25635 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:42.649330   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:42.699732   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:42.699747   25635 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:43.240436   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:43.289943   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:43.289962   25635 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:43.945203   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:43.993895   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:43.993912   25635 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:44.785293   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:44.836261   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:44.836274   25635 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:46.008764   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:46.060242   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:46.060286   25635 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:48.313563   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:48.364009   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:48.364026   25635 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:49.975314   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:50.024518   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:50.024532   25635 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:52.831007   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:52.882484   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:52.882499   25635 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:56.708149   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 14:59:56.757091   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 14:59:56.757104   25635 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 15:00:04.456225   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 15:00:04.507452   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 15:00:04.507467   25635 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 15:00:19.143300   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 15:00:19.194966   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 15:00:19.194983   25635 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 15:00:47.602234   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 15:00:47.654099   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 15:00:47.654115   25635 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 15:01:10.823128   25635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0906 15:01:10.878963   25635 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0906 15:01:10.900877   25635 out.go:177] 
	W0906 15:01:10.921611   25635 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0906 15:01:10.921634   25635 out.go:239] * 
	* 
	W0906 15:01:10.924863   25635 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:01:10.945556   25635 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
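Same root cause as the ingress addon above: every apply of ingress-dns-pod.yaml is refused on localhost:8443, so the enable burns its full retry budget against a dead apiserver. A cheap liveness probe distinguishes "apiserver down" from "apiserver slow" before spending ninety seconds retrying; a sketch (hypothetical diagnostic, hitting the /healthz endpoint the apiserver serves, with certificate verification skipped because the apiserver cert is self-signed):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Diagnostic only: the apiserver cert is self-signed, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Inside the node the apiserver answers on localhost:8443; from the host,
		// use the published port instead (56407 in the inspect output above).
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // the log's "connection refused" case
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /healthz:", resp.Status)
	}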
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220906145358-22187
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220906145358-22187:

-- stdout --
	[
	    {
	        "Id": "3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e",
	        "Created": "2022-09-06T21:54:09.932666785Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37578,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T21:54:10.220315726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/hosts",
	        "LogPath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e-json.log",
	        "Name": "/ingress-addon-legacy-20220906145358-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220906145358-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220906145358-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220906145358-22187",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220906145358-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220906145358-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220906145358-22187",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220906145358-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "836d1168acb1dbdd7bb19f1e65ded0bb31e36fc341412ce094225d3ca7b665cb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56405"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56406"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/836d1168acb1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220906145358-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3a92507b2d6f",
	                        "ingress-addon-legacy-20220906145358-22187"
	                    ],
	                    "NetworkID": "9cfcddf22a693c1a7ac4fa3ac7421e644f62258483b1ab18e4d7b5855af41ebf",
	                    "EndpointID": "f2a0d25a714ca42d831675618877e59ed13c21fd2ea0d1318087fb006046dfb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220906145358-22187 -n ingress-addon-legacy-20220906145358-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220906145358-22187 -n ingress-addon-legacy-20220906145358-22187: exit status 6 (410.037198ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 15:01:11.435038   25732 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220906145358-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220906145358-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)
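
The exit-status-6 pattern above recurs through the remaining ingress-addon failures: the container reports Running, but status.go cannot extract an endpoint because the profile name is missing from the kubeconfig. For local triage, a minimal Go sketch of that kubeconfig check, assuming k8s.io/client-go is available; the program and the KUBECONFIG lookup are illustrative and not part of the test harness:

// Hypothetical diagnostic (not harness code): load the kubeconfig named in
// the status.go error and list the cluster entries it actually contains.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG points at the integration kubeconfig from the log.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintf(os.Stderr, "load kubeconfig: %v\n", err)
		os.Exit(1)
	}
	// status.go's "does not appear in" error fires when the profile name is
	// absent from this map, even though the docker container is running.
	for name, cluster := range cfg.Clusters {
		fmt.Printf("cluster %q -> %s\n", name, cluster.Server)
	}
}

If the profile is absent from the printed clusters, the remedy the log itself prints, `minikube update-context`, is meant to repair exactly this kubeconfig entry.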

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.48s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:158: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220906145358-22187
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220906145358-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e",
	        "Created": "2022-09-06T21:54:09.932666785Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37578,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T21:54:10.220315726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/hosts",
	        "LogPath": "/var/lib/docker/containers/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e/3a92507b2d6f17d1f33a0008e2f4965ebecad6a9e4013c6e8849f546be6cba5e-json.log",
	        "Name": "/ingress-addon-legacy-20220906145358-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220906145358-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220906145358-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0d98189f6e6143bd52663b32cf9b8004a80c337781a091282e7cef41ec6c401/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220906145358-22187",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220906145358-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220906145358-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220906145358-22187",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220906145358-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "836d1168acb1dbdd7bb19f1e65ded0bb31e36fc341412ce094225d3ca7b665cb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56405"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56406"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/836d1168acb1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220906145358-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3a92507b2d6f",
	                        "ingress-addon-legacy-20220906145358-22187"
	                    ],
	                    "NetworkID": "9cfcddf22a693c1a7ac4fa3ac7421e644f62258483b1ab18e4d7b5855af41ebf",
	                    "EndpointID": "f2a0d25a714ca42d831675618877e59ed13c21fd2ea0d1318087fb006046dfb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220906145358-22187 -n ingress-addon-legacy-20220906145358-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220906145358-22187 -n ingress-addon-legacy-20220906145358-22187: exit status 6 (410.911115ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 15:01:11.912749   25746 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220906145358-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220906145358-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.48s)
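
ValidateIngressAddons fails the same way: the Kubernetes client comes back nil because the status probe (helpers_test.go:239) exits 6. A rough stand-in for that probe, assuming the out/minikube-darwin-amd64 binary and the profile name from this run; the wrapper itself is illustrative, not the harness's code:

// Illustrative reproduction of the status probe the harness shells out to.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "ingress-addon-legacy-20220906145358-22187"
	cmd := exec.Command("out/minikube-darwin-amd64",
		"status", "--format", "{{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 6 matches the log: the host state was readable, but the
		// kubeconfig endpoint could not be resolved for this profile.
		fmt.Printf("exit code: %d\n", exitErr.ExitCode())
	}
}

Run from the repository root after a failed run, this should reproduce the "Running" output plus the stale-kubectl warning seen above for as long as the profile's kubeconfig entry is missing.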

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (238.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220906150606-22187
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220906150606-22187
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220906150606-22187: (36.695568009s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220906150606-22187 --wait=true -v=8 --alsologtostderr
E0906 15:12:41.256482   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:12:47.072229   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220906150606-22187 --wait=true -v=8 --alsologtostderr: exit status 80 (3m16.666083941s)

                                                
                                                
-- stdout --
	* [multinode-20220906150606-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220906150606-22187 in cluster multinode-20220906150606-22187
	* Pulling base image ...
	* Restarting existing docker container for "multinode-20220906150606-22187" ...
	* Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-20220906150606-22187-m02 in cluster multinode-20220906150606-22187
	* Pulling base image ...
	* Restarting existing docker container for "multinode-20220906150606-22187-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	  - env NO_PROXY=192.168.58.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 15:09:52.249113   28549 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:09:52.249279   28549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:09:52.249284   28549 out.go:309] Setting ErrFile to fd 2...
	I0906 15:09:52.249288   28549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:09:52.249395   28549 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:09:52.249834   28549 out.go:303] Setting JSON to false
	I0906 15:09:52.265079   28549 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7763,"bootTime":1662494429,"procs":330,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:09:52.265176   28549 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:09:52.287908   28549 out.go:177] * [multinode-20220906150606-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:09:52.330043   28549 notify.go:193] Checking for updates...
	I0906 15:09:52.351753   28549 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:09:52.373885   28549 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:09:52.395109   28549 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:09:52.416813   28549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:09:52.438126   28549 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:09:52.460653   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:09:52.460741   28549 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:09:52.529017   28549 docker.go:137] docker version: linux-20.10.17
	I0906 15:09:52.529142   28549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:09:52.657876   28549 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:09:52.596148613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:09:52.701583   28549 out.go:177] * Using the docker driver based on existing profile
	I0906 15:09:52.723683   28549 start.go:284] selected driver: docker
	I0906 15:09:52.723709   28549 start.go:808] validating driver "docker" against &{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevir
t:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:09:52.723901   28549 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:09:52.724037   28549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:09:52.854484   28549 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:09:52.792922001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:09:52.856623   28549 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:09:52.856648   28549 cni.go:95] Creating CNI manager for ""
	I0906 15:09:52.856657   28549 cni.go:156] 3 nodes found, recommending kindnet
	I0906 15:09:52.856671   28549 start_flags.go:310] config:
	{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:09:52.900403   28549 out.go:177] * Starting control plane node multinode-20220906150606-22187 in cluster multinode-20220906150606-22187
	I0906 15:09:52.921438   28549 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:09:52.943183   28549 out.go:177] * Pulling base image ...
	I0906 15:09:52.986303   28549 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:09:52.986305   28549 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:09:52.986350   28549 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:09:52.986364   28549 cache.go:57] Caching tarball of preloaded images
	I0906 15:09:52.986482   28549 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:09:52.986502   28549 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:09:52.987047   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:09:53.047916   28549 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:09:53.047933   28549 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:09:53.047944   28549 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:09:53.048001   28549 start.go:364] acquiring machines lock for multinode-20220906150606-22187: {Name:mk1f646be94138ec52cb695dba30aa00d55e22df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:09:53.048114   28549 start.go:368] acquired machines lock for "multinode-20220906150606-22187" in 91.342µs
	I0906 15:09:53.048135   28549 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:09:53.048145   28549 fix.go:55] fixHost starting: 
	I0906 15:09:53.048402   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:09:53.110627   28549 fix.go:103] recreateIfNeeded on multinode-20220906150606-22187: state=Stopped err=<nil>
	W0906 15:09:53.110654   28549 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:09:53.154328   28549 out.go:177] * Restarting existing docker container for "multinode-20220906150606-22187" ...
	I0906 15:09:53.175453   28549 cli_runner.go:164] Run: docker start multinode-20220906150606-22187
	I0906 15:09:53.507425   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:09:53.571161   28549 kic.go:415] container "multinode-20220906150606-22187" state is running.
	I0906 15:09:53.571743   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:09:53.638862   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:09:53.639261   28549 machine.go:88] provisioning docker machine ...
	I0906 15:09:53.639282   28549 ubuntu.go:169] provisioning hostname "multinode-20220906150606-22187"
	I0906 15:09:53.639365   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:53.704265   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:53.704456   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:53.704468   28549 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220906150606-22187 && echo "multinode-20220906150606-22187" | sudo tee /etc/hostname
	I0906 15:09:53.826717   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220906150606-22187
	
	I0906 15:09:53.826793   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:53.891178   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:53.891333   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:53.891347   28549 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220906150606-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220906150606-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220906150606-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:09:54.003154   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:09:54.003177   28549 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:09:54.003192   28549 ubuntu.go:177] setting up certificates
	I0906 15:09:54.003205   28549 provision.go:83] configureAuth start
	I0906 15:09:54.003273   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:09:54.129783   28549 provision.go:138] copyHostCerts
	I0906 15:09:54.129831   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:09:54.129904   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:09:54.129921   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:09:54.130043   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:09:54.130221   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:09:54.130250   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:09:54.130254   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:09:54.130317   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:09:54.130457   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:09:54.130483   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:09:54.130489   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:09:54.130549   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:09:54.130667   28549 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.multinode-20220906150606-22187 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220906150606-22187]
	I0906 15:09:54.167995   28549 provision.go:172] copyRemoteCerts
	I0906 15:09:54.168061   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:09:54.168114   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.232559   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:54.314058   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 15:09:54.314145   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:09:54.332104   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 15:09:54.332177   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0906 15:09:54.352099   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 15:09:54.352169   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:09:54.369496   28549 provision.go:86] duration metric: configureAuth took 366.277095ms
	I0906 15:09:54.369509   28549 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:09:54.369685   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:09:54.369744   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.434492   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:54.434691   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:54.434702   28549 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:09:54.544696   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:09:54.544711   28549 ubuntu.go:71] root file system type: overlay
	I0906 15:09:54.544889   28549 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:09:54.544960   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.607160   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:54.607338   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:54.607390   28549 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:09:54.726430   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:09:54.726508   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.788587   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:54.788784   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:54.788801   28549 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:09:54.903682   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:09:54.903701   28549 machine.go:91] provisioned docker machine in 1.264428825s
	I0906 15:09:54.903711   28549 start.go:300] post-start starting for "multinode-20220906150606-22187" (driver="docker")
	I0906 15:09:54.903716   28549 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:09:54.903789   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:09:54.903850   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.966693   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:55.047662   28549 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:09:55.050803   28549 command_runner.go:130] > NAME="Ubuntu"
	I0906 15:09:55.050811   28549 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0906 15:09:55.050814   28549 command_runner.go:130] > ID=ubuntu
	I0906 15:09:55.050817   28549 command_runner.go:130] > ID_LIKE=debian
	I0906 15:09:55.050821   28549 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0906 15:09:55.050831   28549 command_runner.go:130] > VERSION_ID="20.04"
	I0906 15:09:55.050836   28549 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 15:09:55.050843   28549 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 15:09:55.050848   28549 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 15:09:55.050857   28549 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 15:09:55.050861   28549 command_runner.go:130] > VERSION_CODENAME=focal
	I0906 15:09:55.050876   28549 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0906 15:09:55.050924   28549 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:09:55.050938   28549 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:09:55.050957   28549 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:09:55.050964   28549 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:09:55.050975   28549 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:09:55.051087   28549 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:09:55.051224   28549 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:09:55.051230   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /etc/ssl/certs/221872.pem
	I0906 15:09:55.051374   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:09:55.058035   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:09:55.074524   28549 start.go:303] post-start completed in 170.804101ms
	I0906 15:09:55.074592   28549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:09:55.074639   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:55.137425   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:55.218052   28549 command_runner.go:130] > 12%
	I0906 15:09:55.218120   28549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:09:55.222006   28549 command_runner.go:130] > 49G
	I0906 15:09:55.222235   28549 fix.go:57] fixHost completed within 2.174086462s
	I0906 15:09:55.222246   28549 start.go:83] releasing machines lock for "multinode-20220906150606-22187", held for 2.174119704s
	I0906 15:09:55.222314   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:09:55.285473   28549 ssh_runner.go:195] Run: systemctl --version
	I0906 15:09:55.285474   28549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:09:55.285549   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:55.285631   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:55.353413   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:55.353718   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:55.484608   28549 command_runner.go:130] > <a href="https://github.com/kubernetes/k8s.io/wiki/New-Registry-url-for-Kubernetes-(registry.k8s.io)">Temporary Redirect</a>.
	I0906 15:09:55.484656   28549 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0906 15:09:55.484670   28549 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0906 15:09:55.484778   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:09:55.491732   28549 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0906 15:09:55.504103   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:09:55.573539   28549 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 15:09:55.650492   28549 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:09:55.660451   28549 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0906 15:09:55.660618   28549 command_runner.go:130] > [Unit]
	I0906 15:09:55.660629   28549 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 15:09:55.660636   28549 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 15:09:55.660644   28549 command_runner.go:130] > BindsTo=containerd.service
	I0906 15:09:55.660650   28549 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0906 15:09:55.660653   28549 command_runner.go:130] > Wants=network-online.target
	I0906 15:09:55.660661   28549 command_runner.go:130] > Requires=docker.socket
	I0906 15:09:55.660666   28549 command_runner.go:130] > StartLimitBurst=3
	I0906 15:09:55.660673   28549 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 15:09:55.660678   28549 command_runner.go:130] > [Service]
	I0906 15:09:55.660683   28549 command_runner.go:130] > Type=notify
	I0906 15:09:55.660690   28549 command_runner.go:130] > Restart=on-failure
	I0906 15:09:55.660698   28549 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 15:09:55.660705   28549 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 15:09:55.660711   28549 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 15:09:55.660716   28549 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 15:09:55.660721   28549 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 15:09:55.660727   28549 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 15:09:55.660734   28549 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 15:09:55.660744   28549 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 15:09:55.660751   28549 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 15:09:55.660755   28549 command_runner.go:130] > ExecStart=
	I0906 15:09:55.660767   28549 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0906 15:09:55.660772   28549 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 15:09:55.660777   28549 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 15:09:55.660798   28549 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 15:09:55.660806   28549 command_runner.go:130] > LimitNOFILE=infinity
	I0906 15:09:55.660810   28549 command_runner.go:130] > LimitNPROC=infinity
	I0906 15:09:55.660814   28549 command_runner.go:130] > LimitCORE=infinity
	I0906 15:09:55.660818   28549 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 15:09:55.660822   28549 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 15:09:55.660827   28549 command_runner.go:130] > TasksMax=infinity
	I0906 15:09:55.660830   28549 command_runner.go:130] > TimeoutStartSec=0
	I0906 15:09:55.660835   28549 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 15:09:55.660839   28549 command_runner.go:130] > Delegate=yes
	I0906 15:09:55.660844   28549 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 15:09:55.660847   28549 command_runner.go:130] > KillMode=process
	I0906 15:09:55.660858   28549 command_runner.go:130] > [Install]
	I0906 15:09:55.660867   28549 command_runner.go:130] > WantedBy=multi-user.target
	I0906 15:09:55.661555   28549 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:09:55.661609   28549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:09:55.670610   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:09:55.682055   28549 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:09:55.682066   28549 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:09:55.682716   28549 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:09:55.745594   28549 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:09:55.809234   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:09:55.885824   28549 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:09:56.120243   28549 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:09:56.183495   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:09:56.248991   28549 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:09:56.258266   28549 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:09:56.258348   28549 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:09:56.262063   28549 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 15:09:56.262079   28549 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 15:09:56.262086   28549 command_runner.go:130] > Device: 96h/150d	Inode: 114         Links: 1
	I0906 15:09:56.262095   28549 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0906 15:09:56.262105   28549 command_runner.go:130] > Access: 2022-09-06 22:09:55.594302366 +0000
	I0906 15:09:56.262110   28549 command_runner.go:130] > Modify: 2022-09-06 22:09:55.594302366 +0000
	I0906 15:09:56.262115   28549 command_runner.go:130] > Change: 2022-09-06 22:09:55.595302366 +0000
	I0906 15:09:56.262119   28549 command_runner.go:130] >  Birth: -
	I0906 15:09:56.262197   28549 start.go:471] Will wait 60s for crictl version
	I0906 15:09:56.262239   28549 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:09:56.289764   28549 command_runner.go:130] > Version:  0.1.0
	I0906 15:09:56.289775   28549 command_runner.go:130] > RuntimeName:  docker
	I0906 15:09:56.289778   28549 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0906 15:09:56.289782   28549 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0906 15:09:56.291804   28549 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:09:56.291879   28549 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:09:56.324013   28549 command_runner.go:130] > 20.10.17
	I0906 15:09:56.327098   28549 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:09:56.359718   28549 command_runner.go:130] > 20.10.17
	I0906 15:09:56.406489   28549 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:09:56.406607   28549 cli_runner.go:164] Run: docker exec -t multinode-20220906150606-22187 dig +short host.docker.internal
	I0906 15:09:56.527846   28549 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:09:56.527954   28549 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:09:56.532014   28549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:09:56.541444   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:56.605087   28549 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:09:56.605164   28549 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:09:56.632176   28549 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.0
	I0906 15:09:56.632190   28549 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.0
	I0906 15:09:56.632195   28549 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.0
	I0906 15:09:56.632202   28549 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.0
	I0906 15:09:56.632206   28549 command_runner.go:130] > kindest/kindnetd:v20220726-ed811e41
	I0906 15:09:56.632211   28549 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0906 15:09:56.632214   28549 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0906 15:09:56.632220   28549 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0906 15:09:56.632224   28549 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0906 15:09:56.632228   28549 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:09:56.632231   28549 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 15:09:56.635153   28549 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	kindest/kindnetd:v20220726-ed811e41
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 15:09:56.635172   28549 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:09:56.635303   28549 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:09:56.660686   28549 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.0
	I0906 15:09:56.660699   28549 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.0
	I0906 15:09:56.660703   28549 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.0
	I0906 15:09:56.660707   28549 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.0
	I0906 15:09:56.660710   28549 command_runner.go:130] > kindest/kindnetd:v20220726-ed811e41
	I0906 15:09:56.660714   28549 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0906 15:09:56.660718   28549 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0906 15:09:56.660733   28549 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0906 15:09:56.660737   28549 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0906 15:09:56.660741   28549 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:09:56.660754   28549 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 15:09:56.663751   28549 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	kindest/kindnetd:v20220726-ed811e41
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 15:09:56.663767   28549 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:09:56.663845   28549 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:09:56.733245   28549 command_runner.go:130] > systemd
	I0906 15:09:56.736255   28549 cni.go:95] Creating CNI manager for ""
	I0906 15:09:56.736268   28549 cni.go:156] 3 nodes found, recommending kindnet
	I0906 15:09:56.736287   28549 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:09:56.736297   28549 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220906150606-22187 NodeName:multinode-20220906150606-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:09:56.736411   28549 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220906150606-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 15:09:56.736496   28549 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220906150606-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:09:56.736555   28549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:09:56.743040   28549 command_runner.go:130] > kubeadm
	I0906 15:09:56.743047   28549 command_runner.go:130] > kubectl
	I0906 15:09:56.743050   28549 command_runner.go:130] > kubelet
	I0906 15:09:56.743595   28549 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:09:56.743641   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:09:56.750261   28549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (492 bytes)
	I0906 15:09:56.763192   28549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:09:56.775146   28549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0906 15:09:56.787316   28549 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:09:56.790851   28549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:09:56.799949   28549 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187 for IP: 192.168.58.2
	I0906 15:09:56.800049   28549 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:09:56.800100   28549 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:09:56.800173   28549 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key
	I0906 15:09:56.800238   28549 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key.cee25041
	I0906 15:09:56.800293   28549 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key
	I0906 15:09:56.800300   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 15:09:56.800320   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 15:09:56.800350   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 15:09:56.800368   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 15:09:56.800384   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 15:09:56.800398   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 15:09:56.800413   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 15:09:56.800428   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 15:09:56.800539   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:09:56.800576   28549 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:09:56.800592   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:09:56.800626   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:09:56.800663   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:09:56.800692   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:09:56.800752   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:09:56.800783   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:56.800805   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem -> /usr/share/ca-certificates/22187.pem
	I0906 15:09:56.800823   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /usr/share/ca-certificates/221872.pem
	I0906 15:09:56.801304   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:09:56.818154   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:09:56.834407   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:09:56.850832   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:09:56.867454   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:09:56.883833   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:09:56.900099   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:09:56.916879   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:09:56.934296   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:09:56.951005   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:09:56.967840   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:09:56.984366   28549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:09:56.996732   28549 ssh_runner.go:195] Run: openssl version
	I0906 15:09:57.001487   28549 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0906 15:09:57.001802   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:09:57.009559   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:57.013118   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:57.013201   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:57.013240   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:57.017989   28549 command_runner.go:130] > b5213941
	I0906 15:09:57.018343   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:09:57.025210   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:09:57.033032   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:09:57.036786   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:09:57.036946   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:09:57.036984   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:09:57.041698   28549 command_runner.go:130] > 51391683
	I0906 15:09:57.042008   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:09:57.049449   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:09:57.056973   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:09:57.060800   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:09:57.060824   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:09:57.060862   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:09:57.065703   28549 command_runner.go:130] > 3ec20f2e
	I0906 15:09:57.066065   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:09:57.073101   28549 kubeadm.go:396] StartCluster: {Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:09:57.073206   28549 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:09:57.101666   28549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:09:57.108633   28549 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0906 15:09:57.108647   28549 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0906 15:09:57.108670   28549 command_runner.go:130] > /var/lib/minikube/etcd:
	I0906 15:09:57.108677   28549 command_runner.go:130] > member
	I0906 15:09:57.109421   28549 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:09:57.109435   28549 kubeadm.go:627] restartCluster start
	I0906 15:09:57.109481   28549 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:09:57.116223   28549 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.116281   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:57.179468   28549 kubeconfig.go:116] verify returned: extract IP: "multinode-20220906150606-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:09:57.179551   28549 kubeconfig.go:127] "multinode-20220906150606-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:09:57.179804   28549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:09:57.180492   28549 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:09:57.180721   28549 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:09:57.181039   28549 cert_rotation.go:137] Starting client certificate rotation controller
	I0906 15:09:57.181209   28549 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:09:57.188647   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.188700   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:57.196805   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.398928   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.399097   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:57.408772   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.597668   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.597761   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:57.608757   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.798723   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.798862   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:57.808812   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.996893   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.996985   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.005735   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.198879   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.198959   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.208958   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.398351   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.398450   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.408754   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.598855   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.599021   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.608294   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.796970   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.797072   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.808361   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.997634   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.997814   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.007557   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.198962   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.199103   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.209185   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.398497   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.398622   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.408643   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.597533   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.597690   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.607164   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.798962   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.799094   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.810038   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.998952   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.999087   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:10:00.009819   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.199014   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:10:00.199147   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:10:00.208656   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.208666   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:10:00.208709   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:10:00.216363   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.216375   28549 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
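
The loop above is a fixed-interval poll: every ~200 ms minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` and treats exit status 1 (no match) as "not up yet", until an overall deadline expires and kubeadm.go:602 declares the cluster in need of reconfiguration. A minimal sketch of that pattern, assuming a hypothetical waitForAPIServerPID helper (not minikube's actual function):

    // Sketch only: poll pgrep until it reports a kube-apiserver PID or the
    // deadline passes. pgrep exits 1 when nothing matches, as in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // -x: exact match, -n: newest process, -f: match the full command line.
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil // PID found
            }
            time.Sleep(200 * time.Millisecond) // retry cadence seen in the timestamps above
        }
        return "", fmt.Errorf("timed out waiting for the condition")
    }

    func main() {
        pid, err := waitForAPIServerPID(4 * time.Minute)
        fmt.Println(pid, err)
    }
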
	I0906 15:10:00.216382   28549 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:10:00.216437   28549 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:10:00.243805   28549 command_runner.go:130] > df0852bc7a51
	I0906 15:10:00.243819   28549 command_runner.go:130] > 1ed0dda0b42e
	I0906 15:10:00.243823   28549 command_runner.go:130] > a34f733a43c2
	I0906 15:10:00.243826   28549 command_runner.go:130] > c307966101ca
	I0906 15:10:00.243830   28549 command_runner.go:130] > 3c2093315054
	I0906 15:10:00.243833   28549 command_runner.go:130] > fdc326cd3c6a
	I0906 15:10:00.243837   28549 command_runner.go:130] > 4e3670b1600d
	I0906 15:10:00.243841   28549 command_runner.go:130] > 6bd8b364f108
	I0906 15:10:00.243844   28549 command_runner.go:130] > 6d68f544bf54
	I0906 15:10:00.243851   28549 command_runner.go:130] > a165f2074320
	I0906 15:10:00.243854   28549 command_runner.go:130] > 28bc9837a510
	I0906 15:10:00.243857   28549 command_runner.go:130] > 33a1b253bd37
	I0906 15:10:00.243861   28549 command_runner.go:130] > 0c0974b47f92
	I0906 15:10:00.243865   28549 command_runner.go:130] > c27dff0f48e6
	I0906 15:10:00.243869   28549 command_runner.go:130] > 77d6030ab01b
	I0906 15:10:00.243874   28549 command_runner.go:130] > defb450e84c2
	I0906 15:10:00.246728   28549 docker.go:443] Stopping containers: [df0852bc7a51 1ed0dda0b42e a34f733a43c2 c307966101ca 3c2093315054 fdc326cd3c6a 4e3670b1600d 6bd8b364f108 6d68f544bf54 a165f2074320 28bc9837a510 33a1b253bd37 0c0974b47f92 c27dff0f48e6 77d6030ab01b defb450e84c2]
	I0906 15:10:00.246801   28549 ssh_runner.go:195] Run: docker stop df0852bc7a51 1ed0dda0b42e a34f733a43c2 c307966101ca 3c2093315054 fdc326cd3c6a 4e3670b1600d 6bd8b364f108 6d68f544bf54 a165f2074320 28bc9837a510 33a1b253bd37 0c0974b47f92 c27dff0f48e6 77d6030ab01b defb450e84c2
	I0906 15:10:00.271650   28549 command_runner.go:130] > df0852bc7a51
	I0906 15:10:00.271820   28549 command_runner.go:130] > 1ed0dda0b42e
	I0906 15:10:00.272033   28549 command_runner.go:130] > a34f733a43c2
	I0906 15:10:00.272042   28549 command_runner.go:130] > c307966101ca
	I0906 15:10:00.272050   28549 command_runner.go:130] > 3c2093315054
	I0906 15:10:00.272056   28549 command_runner.go:130] > fdc326cd3c6a
	I0906 15:10:00.272065   28549 command_runner.go:130] > 4e3670b1600d
	I0906 15:10:00.272297   28549 command_runner.go:130] > 6bd8b364f108
	I0906 15:10:00.272303   28549 command_runner.go:130] > 6d68f544bf54
	I0906 15:10:00.272318   28549 command_runner.go:130] > a165f2074320
	I0906 15:10:00.272323   28549 command_runner.go:130] > 28bc9837a510
	I0906 15:10:00.272328   28549 command_runner.go:130] > 33a1b253bd37
	I0906 15:10:00.272333   28549 command_runner.go:130] > 0c0974b47f92
	I0906 15:10:00.272338   28549 command_runner.go:130] > c27dff0f48e6
	I0906 15:10:00.272343   28549 command_runner.go:130] > 77d6030ab01b
	I0906 15:10:00.272352   28549 command_runner.go:130] > defb450e84c2
	I0906 15:10:00.275422   28549 ssh_runner.go:195] Run: sudo systemctl stop kubelet
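
Before reconfiguring, every kube-system container is stopped, then the kubelet itself. The name filter works because kubelet-managed Docker containers are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so matching `_(kube-system)_` selects exactly the kube-system pods. A sketch of the same two commands driven from Go; stopKubeSystem is an illustrative name, not minikube's:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func stopKubeSystem() error {
        // List every kubelet-managed container in the kube-system namespace.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_",
            "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        if ids := strings.Fields(string(out)); len(ids) > 0 {
            fmt.Printf("stopping %d containers\n", len(ids))
            if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
                return err
            }
        }
        // Then stop the kubelet so it does not immediately restart the static pods.
        return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
    }

    func main() {
        if err := stopKubeSystem(); err != nil {
            fmt.Println("error:", err)
        }
    }
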
	I0906 15:10:00.285214   28549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:10:00.291920   28549 command_runner.go:130] > -rw------- 1 root root 5639 Sep  6 22:06 /etc/kubernetes/admin.conf
	I0906 15:10:00.291931   28549 command_runner.go:130] > -rw------- 1 root root 5656 Sep  6 22:06 /etc/kubernetes/controller-manager.conf
	I0906 15:10:00.291936   28549 command_runner.go:130] > -rw------- 1 root root 2059 Sep  6 22:06 /etc/kubernetes/kubelet.conf
	I0906 15:10:00.291946   28549 command_runner.go:130] > -rw------- 1 root root 5600 Sep  6 22:06 /etc/kubernetes/scheduler.conf
	I0906 15:10:00.292869   28549 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep  6 22:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Sep  6 22:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep  6 22:06 /etc/kubernetes/scheduler.conf
	
	I0906 15:10:00.292915   28549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:10:00.299598   28549 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0906 15:10:00.300311   28549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:10:00.306656   28549 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0906 15:10:00.307414   28549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:10:00.314205   28549 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.314263   28549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:10:00.321057   28549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:10:00.328298   28549 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.328346   28549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
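
The grep/rm pairs above implement a keep-or-regenerate rule: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so the kubeconfig phase below rewrites it (here controller-manager.conf and scheduler.conf were stale). A minimal sketch of that rule; pruneStaleKubeconfig is a hypothetical name:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func pruneStaleKubeconfig(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // already points at the expected control plane, keep it
        }
        fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
        return os.Remove(path) // kubeadm will regenerate it
    }

    func main() {
        for _, p := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := pruneStaleKubeconfig(p); err != nil {
                fmt.Println("error:", err)
            }
        }
    }
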
	I0906 15:10:00.334828   28549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:10:00.341880   28549 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:10:00.341893   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:00.380872   28549 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:10:00.380888   28549 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0906 15:10:00.380954   28549 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0906 15:10:00.381325   28549 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:10:00.382035   28549 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0906 15:10:00.382044   28549 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:10:00.382048   28549 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0906 15:10:00.382375   28549 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0906 15:10:00.382548   28549 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:10:00.383177   28549 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:10:00.383189   28549 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:10:00.383403   28549 command_runner.go:130] > [certs] Using the existing "sa" key
	I0906 15:10:00.386570   28549 command_runner.go:130] ! W0906 22:10:00.392914    1106 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:00.386587   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:00.426694   28549 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:10:00.589592   28549 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0906 15:10:00.685244   28549 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0906 15:10:00.936853   28549 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:10:01.134938   28549 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:10:01.139172   28549 command_runner.go:130] ! W0906 22:10:00.438679    1116 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:01.139201   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:01.189116   28549 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:10:01.189692   28549 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:10:01.189864   28549 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0906 15:10:01.259629   28549 command_runner.go:130] ! W0906 22:10:01.192033    1138 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:01.259647   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:01.299337   28549 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:10:01.299355   28549 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:10:01.304593   28549 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:10:01.305432   28549 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:10:01.308987   28549 command_runner.go:130] ! W0906 22:10:01.310921    1172 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:01.309011   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:01.360596   28549 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:10:01.366630   28549 command_runner.go:130] ! W0906 22:10:01.371856    1188 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
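
Rather than a full `kubeadm init`, the reconfigure path drives the individual init phases one by one against the freshly copied /var/tmp/minikube/kubeadm.yaml, which is why every certificate and most kubeconfigs are reported as "existing". The repeated initconfiguration.go:119 warning only means the generated config lists criSocket without a URL scheme, and kubeadm prepends unix:// itself. A sketch of the same sequence, assuming a kubeadm binary on PATH (the log uses the pinned copy under /var/lib/minikube/binaries/v1.25.0):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        // The exact phase order the log runs through, 15:10:00 to 15:10:01.
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", cfg},
            {"init", "phase", "kubeconfig", "all", "--config", cfg},
            {"init", "phase", "kubelet-start", "--config", cfg},
            {"init", "phase", "control-plane", "all", "--config", cfg},
            {"init", "phase", "etcd", "local", "--config", cfg},
        }
        for _, args := range phases {
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            fmt.Printf("kubeadm %v: %s (err=%v)\n", args, out, err)
        }
    }
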
	I0906 15:10:01.366667   28549 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:10:01.366730   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:10:01.913225   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:10:02.413164   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:10:02.423684   28549 command_runner.go:130] > 1664
	I0906 15:10:02.423862   28549 api_server.go:71] duration metric: took 1.057205507s to wait for apiserver process to appear ...
	I0906 15:10:02.423883   28549 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:10:02.423902   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:02.425131   28549 api_server.go:256] stopped: https://127.0.0.1:57200/healthz: Get "https://127.0.0.1:57200/healthz": EOF
	I0906 15:10:02.925542   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:05.360035   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:10:05.360049   28549 api_server.go:102] status: https://127.0.0.1:57200/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:10:05.425330   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:05.433768   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:10:05.433787   28549 api_server.go:102] status: https://127.0.0.1:57200/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:10:05.926806   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:05.933750   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:10:05.933765   28549 api_server.go:102] status: https://127.0.0.1:57200/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:10:06.425202   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:06.431557   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:10:06.431574   28549 api_server.go:102] status: https://127.0.0.1:57200/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:10:06.925298   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:06.931859   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 200:
	ok
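
The healthz probe tolerates intermediate failures: the initial EOF means the socket was not yet accepting TLS, the 403 appears while the rbac/bootstrap-roles poststarthook (still failing in the first 500 dump) has not installed the binding that lets even anonymous callers read /healthz, and the 500s enumerate which poststarthooks remain. Only a plain 200 "ok" ends the loop. A minimal sketch of the poll, using an anonymous client with TLS verification disabled rather than minikube's client certificates; waitForHealthz is an illustrative name:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is "ok"
                }
                // 403 until RBAC bootstrap completes; 500 while poststarthooks fail.
                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy")
    }

    func main() {
        fmt.Println(waitForHealthz("https://127.0.0.1:57200/healthz", 4*time.Minute))
    }
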
	I0906 15:10:06.931916   28549 round_trippers.go:463] GET https://127.0.0.1:57200/version
	I0906 15:10:06.931921   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:06.931928   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:06.931934   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:06.938009   28549 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 15:10:06.938019   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:06.938024   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:06.938029   28549 round_trippers.go:580]     Content-Length: 261
	I0906 15:10:06.938034   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:06 GMT
	I0906 15:10:06.938040   28549 round_trippers.go:580]     Audit-Id: 1e243c70-94be-4fec-b6f9-31bf75252e92
	I0906 15:10:06.938044   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:06.938049   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:06.938054   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:06.938073   28549 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.0",
	  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
	  "gitTreeState": "clean",
	  "buildDate": "2022-08-23T17:38:15Z",
	  "goVersion": "go1.19",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 15:10:06.938122   28549 api_server.go:140] control plane version: v1.25.0
	I0906 15:10:06.938129   28549 api_server.go:130] duration metric: took 4.5142281s to wait for apiserver health ...
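
Once /healthz returns ok, a GET to /version confirms the control-plane build; the gitVersion field is what api_server.go:140 reports as v1.25.0. A sketch of the same probe (anonymous access to /version is permitted by the default system:public-info-viewer binding; certificate verification is skipped here for brevity):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}
        resp, err := client.Get("https://127.0.0.1:57200/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var info struct {
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", info.GitVersion) // v1.25.0 above
    }
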
	I0906 15:10:06.938134   28549 cni.go:95] Creating CNI manager for ""
	I0906 15:10:06.938141   28549 cni.go:156] 3 nodes found, recommending kindnet
	I0906 15:10:06.961636   28549 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 15:10:06.982476   28549 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 15:10:06.987759   28549 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0906 15:10:06.987773   28549 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0906 15:10:06.987780   28549 command_runner.go:130] > Device: 8eh/142d	Inode: 267134      Links: 1
	I0906 15:10:06.987788   28549 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 15:10:06.987805   28549 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0906 15:10:06.987814   28549 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0906 15:10:06.987822   28549 command_runner.go:130] > Change: 2022-09-06 21:44:51.197359839 +0000
	I0906 15:10:06.987829   28549 command_runner.go:130] >  Birth: -
	I0906 15:10:06.988166   28549 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.0/kubectl ...
	I0906 15:10:06.988174   28549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0906 15:10:07.001486   28549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 15:10:08.001946   28549 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0906 15:10:08.005028   28549 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0906 15:10:08.008987   28549 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0906 15:10:08.020307   28549 command_runner.go:130] > daemonset.apps/kindnet configured
	I0906 15:10:08.030092   28549 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.02857736s)
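
cni.go:156 picks the CNI from cluster shape: nothing was requested on the command line and three nodes exist, so kindnet is recommended and its manifest is re-applied idempotently, hence the "unchanged"/"configured" lines. A toy sketch of that decision; the single-node fallback value is a placeholder, not minikube's actual selection table:

    package main

    import "fmt"

    func chooseCNI(requested string, nodes int) string {
        if requested != "" {
            return requested // an explicit --cni flag wins
        }
        if nodes > 1 {
            return "kindnet" // multinode needs cross-node pod routing
        }
        return "auto" // placeholder: single-node selection is driver-dependent
    }

    func main() {
        fmt.Println(chooseCNI("", 3)) // "kindnet", matching "3 nodes found"
    }
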
	I0906 15:10:08.030120   28549 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:10:08.030180   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:08.030185   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.030191   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.030197   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.034496   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:08.034509   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.034514   28549 round_trippers.go:580]     Audit-Id: 0076a17a-44ea-4fd7-be39-ccac2b826ad8
	I0906 15:10:08.034519   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.034530   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.034537   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.034543   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.034550   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.037686   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"720"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"410","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84179 chars]
	I0906 15:10:08.040647   28549 system_pods.go:59] 12 kube-system pods found
	I0906 15:10:08.040662   28549 system_pods.go:61] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:10:08.040673   28549 system_pods.go:61] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:10:08.040680   28549 system_pods.go:61] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:10:08.040683   28549 system_pods.go:61] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:10:08.040687   28549 system_pods.go:61] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:10:08.040695   28549 system_pods.go:61] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:10:08.040700   28549 system_pods.go:61] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:10:08.040704   28549 system_pods.go:61] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:10:08.040707   28549 system_pods.go:61] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:10:08.040711   28549 system_pods.go:61] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:10:08.040715   28549 system_pods.go:61] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:10:08.040721   28549 system_pods.go:61] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running
	I0906 15:10:08.040725   28549 system_pods.go:74] duration metric: took 10.600213ms to wait for pod list to return data ...
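
system_pods.go lists everything in kube-system through the forwarded endpoint and prints each pod with its phase and any unready containers, which is where the "Running / Ready:ContainersNotReady" annotations above come from. An equivalent check with client-go, assuming the kubeconfig path the log uses inside the node (from a host shell you would point at your own kubeconfig instead):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            var unready []string
            for _, cs := range p.Status.ContainerStatuses {
                if !cs.Ready {
                    unready = append(unready, cs.Name)
                }
            }
            fmt.Printf("%q %s (unready containers: %v)\n", p.Name, p.Status.Phase, unready)
        }
    }
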
	I0906 15:10:08.040731   28549 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:10:08.040768   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes
	I0906 15:10:08.040772   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.040778   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.040784   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.043531   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:08.043544   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.043552   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.043561   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.043569   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.043574   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.043579   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.043583   28549 round_trippers.go:580]     Audit-Id: 98cf2dab-30f1-49e4-befe-b2dea3ce89db
	I0906 15:10:08.044185   28549 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"720"},"items":[{"metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-m
anaged-attach-detach":"true"},"managedFields":[{"manager":"kubelet","op [truncated 16412 chars]
	I0906 15:10:08.044888   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:08.044907   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:08.044920   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:08.044926   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:08.044931   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:08.044939   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:08.044946   28549 node_conditions.go:105] duration metric: took 4.210209ms to run NodePressure ...
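
The NodePressure verification reads the node list and reports per-node capacity; the three identical ephemeral-storage/cpu pairs above correspond to the three docker-driver nodes sharing one host. A client-go sketch that surfaces the same two capacity fields (kubeconfig path as in the previous sketch):

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[v1.ResourceCPU]
            // e.g. "ephemeral=61202244Ki cpu=6", matching the log above
            fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
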
	I0906 15:10:08.044966   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:08.236877   28549 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0906 15:10:08.310612   28549 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0906 15:10:08.314079   28549 command_runner.go:130] ! W0906 22:10:08.133832    2389 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:08.314099   28549 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:10:08.314148   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0906 15:10:08.314153   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.314159   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.314165   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.317077   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:08.317089   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.317095   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.317127   28549 round_trippers.go:580]     Audit-Id: 34232c55-f461-4f54-8ef6-b8a79984f74c
	I0906 15:10:08.317133   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.317137   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.317142   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.317147   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.317380   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"724"},"items":[{"metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"368","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"
f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.adve [truncated 30664 chars]
	I0906 15:10:08.318110   28549 kubeadm.go:778] kubelet initialised
	I0906 15:10:08.318118   28549 kubeadm.go:779] duration metric: took 4.012273ms waiting for restarted kubelet to initialise ...
	I0906 15:10:08.318126   28549 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:10:08.318160   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:08.318165   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.318171   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.318177   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.321144   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:08.321157   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.321164   28549 round_trippers.go:580]     Audit-Id: ad350e2b-21d4-47e7-ad0f-330fa3160745
	I0906 15:10:08.321171   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.321177   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.321183   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.321188   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.321193   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.322905   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"724"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"410","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84179 chars]
	I0906 15:10:08.324801   28549 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:08.324848   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:08.324853   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.324859   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.324866   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.326845   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.326855   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.326860   28549 round_trippers.go:580]     Audit-Id: e11f06ae-6ad7-4233-a87e-19d865b0b514
	I0906 15:10:08.326865   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.326870   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.326878   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.326883   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.326888   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.326944   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"410","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6357 chars]
	I0906 15:10:08.327210   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:08.327216   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.327222   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.327227   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.329072   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.329081   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.329087   28549 round_trippers.go:580]     Audit-Id: 7f7972d1-75fc-4c25-865f-6afa7f3961cb
	I0906 15:10:08.329092   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.329096   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.329101   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.329106   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.329111   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.329279   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:08.329466   28549 pod_ready.go:92] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:08.329471   28549 pod_ready.go:81] duration metric: took 4.658673ms waiting for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:08.329477   28549 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:08.329503   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:10:08.329508   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.329513   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.329518   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.331343   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.331352   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.331358   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.331363   28549 round_trippers.go:580]     Audit-Id: 56583076-1ca9-4009-aeb0-929b451b72f4
	I0906 15:10:08.331368   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.331374   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.331378   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.331384   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.331694   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"368","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash" [truncated 5906 chars]
	I0906 15:10:08.331917   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:08.331923   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.331932   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.331937   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.333889   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.333898   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.333903   28549 round_trippers.go:580]     Audit-Id: d7dc63db-b0a6-44f1-8289-2df9fec46c77
	I0906 15:10:08.333908   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.333913   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.333917   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.333922   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.333927   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.334075   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:08.334247   28549 pod_ready.go:92] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:08.334253   28549 pod_ready.go:81] duration metric: took 4.77224ms waiting for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
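
Each pod_ready wait is the same two-request cycle seen above: GET the pod, check its PodReady condition, and GET the owning node; "has status Ready:True" short-circuits the 4m0s budget. The kube-apiserver pod, still reporting ContainersNotReady, is what keeps the loop below iterating. A sketch of the condition check; podReady is an illustrative helper, not minikube's function:

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(client *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == v1.PodReady {
                return c.Status == v1.ConditionTrue, nil
            }
        }
        return false, nil // no Ready condition reported yet
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ok, err := podReady(client, "kube-system", "coredns-565d847f94-t6l66")
        fmt.Println(ok, err)
    }
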
	I0906 15:10:08.334262   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:08.334294   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:08.334299   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.334304   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.334309   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.336457   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:08.336464   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.336469   28549 round_trippers.go:580]     Audit-Id: 014453c6-2bf5-431e-871f-02eca14b5180
	I0906 15:10:08.336474   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.336479   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.336484   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.336488   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.336493   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.336579   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:08.336833   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:08.336838   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.336844   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.336850   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.338331   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.338339   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.338345   28549 round_trippers.go:580]     Audit-Id: 08d4e436-3650-40a8-adfa-91ed0e6bf3d6
	I0906 15:10:08.338349   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.338354   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.338361   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.338367   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.338371   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.338686   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:08.840390   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:08.840413   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.840443   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.840479   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.843856   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:08.843871   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.843882   28549 round_trippers.go:580]     Audit-Id: 1ef375e7-272d-459a-9173-a59617518416
	I0906 15:10:08.843892   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.843900   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.843907   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.843913   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.843923   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.844440   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:08.844821   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:08.844830   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.844838   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.844845   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.846824   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.846833   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.846838   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.846843   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.846847   28549 round_trippers.go:580]     Audit-Id: a5d571c3-8969-4883-b1da-c116d2869e69
	I0906 15:10:08.846852   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.846856   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.846861   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.847019   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:09.339078   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:09.339104   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:09.339115   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:09.339125   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:09.342748   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:09.342765   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:09.342773   28549 round_trippers.go:580]     Audit-Id: c07f9b3f-e2dd-42f3-aba8-9879878b8e79
	I0906 15:10:09.342779   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:09.342787   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:09.342793   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:09.342800   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:09.342806   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:09 GMT
	I0906 15:10:09.342924   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:09.343201   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:09.343206   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:09.343212   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:09.343218   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:09.345117   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:09.345128   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:09.345134   28549 round_trippers.go:580]     Audit-Id: 3344fb03-0980-490e-9032-7ce3e7279e77
	I0906 15:10:09.345141   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:09.345153   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:09.345163   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:09.345170   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:09.345175   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:09 GMT
	I0906 15:10:09.345336   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:09.839071   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:09.839084   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:09.839107   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:09.839112   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:09.841411   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:09.841422   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:09.841427   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:09.841432   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:09.841437   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:09.841442   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:09.841446   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:09 GMT
	I0906 15:10:09.841450   28549 round_trippers.go:580]     Audit-Id: 5114bc2f-d424-4c2d-9c30-140694b9ff92
	I0906 15:10:09.841561   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:09.841849   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:09.841855   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:09.841863   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:09.841868   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:09.843568   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:09.843578   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:09.843583   28549 round_trippers.go:580]     Audit-Id: ca742599-67ab-4897-a9bd-7cd08983bee4
	I0906 15:10:09.843588   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:09.843593   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:09.843597   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:09.843601   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:09.843605   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:09 GMT
	I0906 15:10:09.843878   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:10.339423   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:10.339446   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:10.339458   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:10.339468   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:10.343378   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:10.343392   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:10.343405   28549 round_trippers.go:580]     Audit-Id: adfd21fa-d007-4deb-99a9-384ce3521f5c
	I0906 15:10:10.343415   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:10.343429   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:10.343438   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:10.343445   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:10.343453   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:10 GMT
	I0906 15:10:10.343556   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:10.343953   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:10.343962   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:10.343971   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:10.343987   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:10.345896   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:10.345905   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:10.345911   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:10.345918   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:10 GMT
	I0906 15:10:10.345923   28549 round_trippers.go:580]     Audit-Id: 1e522e4c-de52-4d57-befc-ce3825897cc9
	I0906 15:10:10.345927   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:10.345935   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:10.345940   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:10.345995   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:10.346184   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
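The request/response pairs above repeat on a roughly 500ms cadence: each cycle GETs the kube-apiserver static pod and then its node, after which pod_ready.go reports the pod's Ready condition, still False at this point. A minimal sketch of this style of readiness poll with client-go (illustrative only, not minikube's actual pod_ready.go; the kubeconfig loading and the 2-minute timeout are assumptions):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the API server until the named pod reports
    // Ready=True, mirroring the ~500ms GET cadence seen in the log.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat transient API errors as "not ready yet"
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Pod name taken from the log above; the timeout is an assumption.
    	if err := waitPodReady(cs, "kube-system",
    		"kube-apiserver-multinode-20220906150606-22187", 2*time.Minute); err != nil {
    		panic(err)
    	}
    }
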
	I0906 15:10:10.839437   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:10.839457   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:10.839466   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:10.839473   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:10.842491   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:10.842503   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:10.842510   28549 round_trippers.go:580]     Audit-Id: 0d20f7dd-2af0-4648-b72e-9964414712f6
	I0906 15:10:10.842517   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:10.842523   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:10.842527   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:10.842533   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:10.842537   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:10 GMT
	I0906 15:10:10.843307   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:10.845148   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:10.845157   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:10.845164   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:10.845169   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:10.847540   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:10.847551   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:10.847556   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:10.847561   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:10.847566   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:10.847570   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:10.847576   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:10 GMT
	I0906 15:10:10.847580   28549 round_trippers.go:580]     Audit-Id: 5dfb9918-56cd-4496-ac14-614468834a72
	I0906 15:10:10.847631   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:11.339107   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:11.339134   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:11.339177   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:11.339193   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:11.342129   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:11.342141   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:11.342150   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:11 GMT
	I0906 15:10:11.342155   28549 round_trippers.go:580]     Audit-Id: 31ad3468-261e-4623-b9cc-4d24583e6bff
	I0906 15:10:11.342161   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:11.342165   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:11.342170   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:11.342174   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:11.342458   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:11.342740   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:11.342746   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:11.342752   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:11.342757   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:11.344571   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:11.344581   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:11.344586   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:11.344593   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:11.344599   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:11.344604   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:11 GMT
	I0906 15:10:11.344608   28549 round_trippers.go:580]     Audit-Id: 8205c1a9-17f6-477e-b9f6-f44f045630f1
	I0906 15:10:11.344613   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:11.344651   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:11.839068   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:11.839088   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:11.839097   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:11.839118   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:11.841682   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:11.841693   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:11.841698   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:11.841715   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:11.841723   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:11.841728   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:11.841736   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:11 GMT
	I0906 15:10:11.841744   28549 round_trippers.go:580]     Audit-Id: 074cf248-e2fa-418e-a383-8b506c3051de
	I0906 15:10:11.842110   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:11.842386   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:11.842392   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:11.842397   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:11.842402   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:11.844435   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:11.844445   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:11.844451   28549 round_trippers.go:580]     Audit-Id: 47b52112-289d-4d7b-b5be-5f511f033807
	I0906 15:10:11.844458   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:11.844466   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:11.844474   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:11.844481   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:11.844488   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:11 GMT
	I0906 15:10:11.844564   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:12.339709   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:12.339722   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:12.339738   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:12.339744   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:12.342186   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:12.342196   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:12.342202   28549 round_trippers.go:580]     Audit-Id: e69feef5-fb8e-488b-bc50-73763c330c65
	I0906 15:10:12.342206   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:12.342212   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:12.342216   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:12.342221   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:12.342225   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:12 GMT
	I0906 15:10:12.342302   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:12.342597   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:12.342604   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:12.342610   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:12.342616   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:12.344572   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:12.344581   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:12.344589   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:12.344594   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:12.344599   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:12 GMT
	I0906 15:10:12.344604   28549 round_trippers.go:580]     Audit-Id: b9ae28aa-2cc6-44be-b4b8-b518adbc6134
	I0906 15:10:12.344608   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:12.344613   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:12.344653   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:12.838999   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:12.839031   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:12.839065   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:12.839076   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:12.841956   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:12.841969   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:12.841977   28549 round_trippers.go:580]     Audit-Id: 26aded15-6640-4241-96d3-fab002e7a9c4
	I0906 15:10:12.841982   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:12.841987   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:12.841995   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:12.842002   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:12.842008   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:12 GMT
	I0906 15:10:12.842233   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:12.842513   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:12.842518   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:12.842524   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:12.842530   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:12.844525   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:12.844545   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:12.844557   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:12.844565   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:12 GMT
	I0906 15:10:12.844572   28549 round_trippers.go:580]     Audit-Id: 2860ecd9-cd3f-4e76-880c-341e070be1f2
	I0906 15:10:12.844577   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:12.844583   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:12.844589   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:12.844641   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:12.844825   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
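Each poll cycle above also fetches the Node object for multinode-20220906150606-22187 alongside the pod, and a readiness check in this style would plausibly consult the node's own Ready condition as well. A companion sketch under the same assumptions (again illustrative, not minikube's actual code):

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the named node's Ready condition is True;
    // this is the node-side half of the paired GETs seen in the log.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
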
	I0906 15:10:13.339052   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:13.339068   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:13.339076   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:13.339084   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:13.341934   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:13.341947   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:13.341953   28549 round_trippers.go:580]     Audit-Id: 0214da62-133b-4bba-94b8-4428258da43a
	I0906 15:10:13.341962   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:13.341970   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:13.341975   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:13.341980   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:13.341984   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:13 GMT
	I0906 15:10:13.342052   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:13.342332   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:13.342338   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:13.342344   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:13.342372   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:13.344199   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:13.344207   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:13.344212   28549 round_trippers.go:580]     Audit-Id: 60530d9e-23f2-40a2-b69b-af3f89ce4bcd
	I0906 15:10:13.344217   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:13.344222   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:13.344226   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:13.344231   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:13.344236   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:13 GMT
	I0906 15:10:13.344381   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:13.839036   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:13.839047   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:13.839053   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:13.839058   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:13.841203   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:13.841213   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:13.841219   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:13.841223   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:13.841227   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:13.841232   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:13.841237   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:13 GMT
	I0906 15:10:13.841243   28549 round_trippers.go:580]     Audit-Id: de8fea88-770f-4291-bde9-e5853b543fbe
	I0906 15:10:13.841522   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:13.841816   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:13.841823   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:13.841829   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:13.841834   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:13.843383   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:13.843391   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:13.843396   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:13.843402   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:13.843410   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:13 GMT
	I0906 15:10:13.843417   28549 round_trippers.go:580]     Audit-Id: ca94da72-2562-4812-b318-d17e3f58648f
	I0906 15:10:13.843423   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:13.843428   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:13.843605   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:14.339157   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:14.339190   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:14.339202   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:14.339210   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:14.342031   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:14.342043   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:14.342049   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:14.342054   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:14.342060   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:14.342064   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:14 GMT
	I0906 15:10:14.342069   28549 round_trippers.go:580]     Audit-Id: 68bf7fbb-9fd4-4fd2-ac69-afb9bccba288
	I0906 15:10:14.342073   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:14.342144   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:14.342430   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:14.342437   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:14.342443   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:14.342449   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:14.344360   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:14.344370   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:14.344375   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:14.344380   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:14 GMT
	I0906 15:10:14.344385   28549 round_trippers.go:580]     Audit-Id: f2bca961-04be-4d0a-8d62-5e7242bd708e
	I0906 15:10:14.344390   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:14.344395   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:14.344399   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:14.344444   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:14.839198   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:14.839209   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:14.839215   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:14.839221   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:14.841822   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:14.841833   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:14.841838   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:14.841844   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:14 GMT
	I0906 15:10:14.841848   28549 round_trippers.go:580]     Audit-Id: 7eccb69d-94e3-4c88-b851-cce67b037c8c
	I0906 15:10:14.841853   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:14.841858   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:14.841862   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:14.841935   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:14.842207   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:14.842213   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:14.842219   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:14.842228   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:14.843971   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:14.843983   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:14.843988   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:14.843993   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:14.844005   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:14.844020   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:14 GMT
	I0906 15:10:14.844033   28549 round_trippers.go:580]     Audit-Id: ed7eb447-372a-4b3a-a2bd-4882d89bfcef
	I0906 15:10:14.844040   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:14.844234   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:15.340006   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:15.340029   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:15.340042   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:15.340051   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:15.343290   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:15.343301   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:15.343307   28549 round_trippers.go:580]     Audit-Id: 9d1626b4-9e32-4742-a67a-1a12e7aab82f
	I0906 15:10:15.343318   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:15.343324   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:15.343328   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:15.343336   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:15.343341   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:15 GMT
	I0906 15:10:15.343535   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:15.343818   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:15.343824   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:15.343830   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:15.343835   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:15.345521   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:15.345531   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:15.345538   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:15.345546   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:15 GMT
	I0906 15:10:15.345552   28549 round_trippers.go:580]     Audit-Id: 5131f6a6-74bd-4c56-9fbc-a7d8fa11a24c
	I0906 15:10:15.345557   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:15.345563   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:15.345567   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:15.345885   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:15.346062   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
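The status line above is the crux of the wait loop captured in this log: minikube's pod_ready check GETs the kube-apiserver mirror pod roughly every 500 ms and logs "Ready":"False" until the pod's Ready condition flips to True or the wait times out. A minimal client-go sketch of such a poll follows; it assumes an already-built clientset cs, and the names podwait, isPodReady, and waitForAPIServerPod are illustrative, not minikube's actual implementation.

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForAPIServerPod polls the mirror pod on the ~500ms cadence seen in
// the log until it reports Ready or ctx is cancelled.
func waitForAPIServerPod(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollImmediateUntilWithContext(ctx, 500*time.Millisecond, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		if !isPodReady(pod) {
			fmt.Printf("pod %q in \"kube-system\" namespace is not Ready yet\n", name)
			return false, nil
		}
		return true, nil
	})
}

Polling rather than watching keeps the checker stateless across apiserver restarts, which matches the steady one-GET-per-interval pattern recorded here.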
	I0906 15:10:15.839595   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:15.839615   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:15.839626   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:15.839636   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:15.843534   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:15.843546   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:15.843552   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:15 GMT
	I0906 15:10:15.843556   28549 round_trippers.go:580]     Audit-Id: ad811e0b-209a-4183-93e0-13bd82297ca1
	I0906 15:10:15.843561   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:15.843566   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:15.843570   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:15.843575   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:15.843676   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:15.843953   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:15.843959   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:15.843965   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:15.843969   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:15.845852   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:15.845861   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:15.845867   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:15.845874   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:15.845879   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:15.845884   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:15.845889   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:15 GMT
	I0906 15:10:15.845894   28549 round_trippers.go:580]     Audit-Id: fb9b9b5f-6260-4f7a-b44c-df9dd7204a64
	I0906 15:10:15.845937   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:16.339143   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:16.339157   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:16.339165   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:16.339172   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:16.342260   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:16.342270   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:16.342275   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:16.342280   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:16.342285   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:16.342289   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:16 GMT
	I0906 15:10:16.342294   28549 round_trippers.go:580]     Audit-Id: 6ebe6407-718a-4a80-95e0-2291dca56ad7
	I0906 15:10:16.342299   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:16.342392   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:16.342667   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:16.342672   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:16.342678   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:16.342683   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:16.344495   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:16.344504   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:16.344509   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:16.344514   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:16.344519   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:16 GMT
	I0906 15:10:16.344523   28549 round_trippers.go:580]     Audit-Id: 3018b8d5-10c9-4784-90ac-9220fa47e525
	I0906 15:10:16.344528   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:16.344533   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:16.344574   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:16.839076   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:16.839099   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:16.839134   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:16.839147   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:16.841957   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:16.841970   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:16.841975   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:16.841980   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:16.841985   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:16.841989   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:16.841993   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:16 GMT
	I0906 15:10:16.841998   28549 round_trippers.go:580]     Audit-Id: b5f6f0a9-aeb6-4ac7-81d1-62ebe1f363ab
	I0906 15:10:16.842061   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:16.842338   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:16.842344   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:16.842350   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:16.842354   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:16.843917   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:16.843925   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:16.843931   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:16 GMT
	I0906 15:10:16.843935   28549 round_trippers.go:580]     Audit-Id: d98c9752-7b29-443d-b789-81df92ad4623
	I0906 15:10:16.843940   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:16.843945   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:16.843950   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:16.843955   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:16.844490   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:17.339024   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:17.339042   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:17.339050   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:17.339057   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:17.341525   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:17.341534   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:17.341540   28549 round_trippers.go:580]     Audit-Id: 3a74029d-207f-4e86-b98b-6cc0acffebb6
	I0906 15:10:17.341544   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:17.341549   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:17.341554   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:17.341558   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:17.341563   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:17 GMT
	I0906 15:10:17.341624   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:17.341908   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:17.341914   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:17.341920   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:17.341925   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:17.343803   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:17.343811   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:17.343816   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:17 GMT
	I0906 15:10:17.343821   28549 round_trippers.go:580]     Audit-Id: 51c7bd0e-9df3-4c26-afed-f8b7f7259c26
	I0906 15:10:17.343827   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:17.343832   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:17.343836   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:17.343841   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:17.343886   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:17.839304   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:17.839321   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:17.839333   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:17.839352   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:17.841871   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:17.841880   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:17.841886   28549 round_trippers.go:580]     Audit-Id: 3a31913c-f101-40e0-88e0-432208120eb0
	I0906 15:10:17.841890   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:17.841895   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:17.841899   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:17.841904   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:17.841914   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:17 GMT
	I0906 15:10:17.842249   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:17.842523   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:17.842529   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:17.842535   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:17.842540   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:17.844383   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:17.844392   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:17.844399   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:17.844408   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:17.844413   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:17.844421   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:17.844433   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:17 GMT
	I0906 15:10:17.844444   28549 round_trippers.go:580]     Audit-Id: 12cdb2a4-6e55-41b9-9fc5-9855cb60d052
	I0906 15:10:17.844689   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:17.844875   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
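Each poll above pairs the pod GET with a GET of the node object (the /api/v1/nodes/... requests), and throughout this stretch the node keeps coming back unchanged at resourceVersion "703" while the pod stays unready. A readiness wait will often gate on the hosting node's Ready condition as well; a companion helper in the same illustrative sketch, with the caveat that minikube's exact logic may differ:

// isNodeReady reports whether the node's Ready condition is True.
// Companion to isPodReady in the sketch above; illustrative only.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}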
	I0906 15:10:18.340600   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:18.340660   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:18.340675   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:18.340689   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:18.344790   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:18.344806   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:18.344814   28549 round_trippers.go:580]     Audit-Id: 48f99909-2ae5-4c09-b87f-a6490253d814
	I0906 15:10:18.344820   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:18.344826   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:18.344832   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:18.344838   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:18.344845   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:18 GMT
	I0906 15:10:18.344942   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:18.345221   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:18.345227   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:18.345233   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:18.345238   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:18.347131   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:18.347140   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:18.347147   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:18.347153   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:18.347158   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:18.347164   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:18 GMT
	I0906 15:10:18.347169   28549 round_trippers.go:580]     Audit-Id: f7b23df2-5bc7-4ece-bf79-cbd3c49381c2
	I0906 15:10:18.347174   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:18.347220   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:18.841160   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:18.841182   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:18.841194   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:18.841204   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:18.844525   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:18.844537   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:18.844544   28549 round_trippers.go:580]     Audit-Id: 00ed7db5-557f-4715-a0f8-53dd9a71372e
	I0906 15:10:18.844549   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:18.844553   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:18.844559   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:18.844563   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:18.844568   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:18 GMT
	I0906 15:10:18.844649   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:18.844930   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:18.844936   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:18.844942   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:18.844947   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:18.846853   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:18.846864   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:18.846871   28549 round_trippers.go:580]     Audit-Id: fe82df8b-3aa8-4287-92bc-21a241aaf673
	I0906 15:10:18.846876   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:18.846881   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:18.846900   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:18.846907   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:18.846912   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:18 GMT
	I0906 15:10:18.846967   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:19.339361   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:19.339384   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:19.339396   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:19.339429   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:19.343234   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:19.343247   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:19.343254   28549 round_trippers.go:580]     Audit-Id: b48e0157-4f07-4d32-aa98-a2b5f0ff3870
	I0906 15:10:19.343261   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:19.343267   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:19.343273   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:19.343279   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:19.343287   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:19 GMT
	I0906 15:10:19.343404   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:19.343772   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:19.343780   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:19.343788   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:19.343795   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:19.345564   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:19.345573   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:19.345579   28549 round_trippers.go:580]     Audit-Id: 8e53a287-5c53-4922-a4f5-c1b0747e8b36
	I0906 15:10:19.345584   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:19.345588   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:19.345593   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:19.345598   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:19.345603   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:19 GMT
	I0906 15:10:19.345643   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:19.839470   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:19.839500   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:19.839510   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:19.839517   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:19.842592   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:19.842601   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:19.842607   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:19.842612   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:19.842616   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:19 GMT
	I0906 15:10:19.842621   28549 round_trippers.go:580]     Audit-Id: 427b2e47-3bfa-46ab-8f29-328a589ca153
	I0906 15:10:19.842626   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:19.842630   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:19.842699   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:19.842982   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:19.842989   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:19.842995   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:19.843000   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:19.844999   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:19.845010   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:19.845016   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:19.845022   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:19 GMT
	I0906 15:10:19.845027   28549 round_trippers.go:580]     Audit-Id: ae8ee28f-68b0-406b-aa9e-e18e376b7ebf
	I0906 15:10:19.845031   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:19.845036   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:19.845040   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:19.845230   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:19.845431   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:20.339816   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:20.339835   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:20.339847   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:20.339856   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:20.343462   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:20.343472   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:20.343478   28549 round_trippers.go:580]     Audit-Id: 985b7b29-14ac-46f2-af25-3609509f3f7f
	I0906 15:10:20.343483   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:20.343488   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:20.343492   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:20.343497   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:20.343502   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:20 GMT
	I0906 15:10:20.343572   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:20.343852   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:20.343858   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:20.343864   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:20.343869   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:20.345787   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:20.345796   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:20.345802   28549 round_trippers.go:580]     Audit-Id: 22db0a5e-d906-4d0d-a867-6b0700dee4c5
	I0906 15:10:20.345809   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:20.345816   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:20.345821   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:20.345826   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:20.345831   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:20 GMT
	I0906 15:10:20.346081   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:20.839990   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:20.840006   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:20.840014   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:20.840022   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:20.843155   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:20.843171   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:20.843180   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:20.843189   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:20.843195   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:20.843202   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:20 GMT
	I0906 15:10:20.843209   28549 round_trippers.go:580]     Audit-Id: 774a07a2-7d0e-4d41-ad81-b802e2db28f9
	I0906 15:10:20.843213   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:20.843286   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:20.843564   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:20.843570   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:20.843576   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:20.843581   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:20.845482   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:20.845491   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:20.845497   28549 round_trippers.go:580]     Audit-Id: a7996f4b-dbd8-4368-b8f9-85111b96fbfb
	I0906 15:10:20.845501   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:20.845507   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:20.845512   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:20.845517   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:20.845521   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:20 GMT
	I0906 15:10:20.845559   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.340997   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:21.341036   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.341096   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.341108   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.344338   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:21.344350   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.344356   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.344363   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.344374   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.344383   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.344387   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.344392   28549 round_trippers.go:580]     Audit-Id: 17b20b2a-8af5-4b3d-a0df-3b022604aad0
	I0906 15:10:21.344471   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"793","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8471 chars]
	I0906 15:10:21.344744   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.344749   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.344755   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.344760   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.346614   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.346623   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.346629   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.346634   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.346642   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.346649   28549 round_trippers.go:580]     Audit-Id: fe5de989-dc99-4f86-aca0-e012b3e57093
	I0906 15:10:21.346656   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.346663   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.346721   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.346899   28549 pod_ready.go:92] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.346910   28549 pod_ready.go:81] duration metric: took 13.012599203s waiting for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.346918   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.346944   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:10:21.346949   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.346955   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.346961   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.348873   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.348882   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.348887   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.348891   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.348896   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.348901   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.348906   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.348910   28549 round_trippers.go:580]     Audit-Id: e227e4e0-76e6-4bf2-a5b6-97b3998865f5
	I0906 15:10:21.348961   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"768","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 8044 chars]
	I0906 15:10:21.349207   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.349213   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.349218   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.349229   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.351026   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.351035   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.351040   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.351046   28549 round_trippers.go:580]     Audit-Id: cd3efd18-23e4-4ff6-bb69-a9619ca15d65
	I0906 15:10:21.351056   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.351061   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.351066   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.351071   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.351111   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.351282   28549 pod_ready.go:92] pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.351288   28549 pod_ready.go:81] duration metric: took 4.364684ms waiting for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.351293   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.351317   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-czbjx
	I0906 15:10:21.351321   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.351327   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.351332   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.352854   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.352864   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.352869   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.352875   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.352879   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.352885   28549 round_trippers.go:580]     Audit-Id: 790ccaef-5484-4d6c-82b2-3e5f02145fc2
	I0906 15:10:21.352890   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.352894   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.352934   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-czbjx","generateName":"kube-proxy-","namespace":"kube-system","uid":"c88daf0a-05d7-45b7-b888-8e0749e4d321","resourceVersion":"672","creationTimestamp":"2022-09-06T22:08:13Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:08:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5772 chars]
	I0906 15:10:21.353161   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m03
	I0906 15:10:21.353166   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.353172   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.353177   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.354679   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.354687   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.354692   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.354697   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.354701   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.354706   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.354710   28549 round_trippers.go:580]     Audit-Id: c009e443-4fee-4d7e-9efb-94d7a83314ea
	I0906 15:10:21.354716   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.355011   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m03","uid":"268cefad-05d1-4e4b-b44e-2d8678e78e39","resourceVersion":"685","creationTimestamp":"2022-09-06T22:09:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:09:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostnam
e":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Upd [truncated 4408 chars]
	I0906 15:10:21.355171   28549 pod_ready.go:92] pod "kube-proxy-czbjx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.355181   28549 pod_ready.go:81] duration metric: took 3.883796ms waiting for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.355187   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.355211   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:10:21.355215   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.355221   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.355226   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.356811   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.356819   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.356824   28549 round_trippers.go:580]     Audit-Id: d2f93656-abfa-4313-aee3-082467b35dcb
	I0906 15:10:21.356828   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.356834   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.356839   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.356843   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.356848   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.357208   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kkmpm","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b228e9a-6577-46a3-b848-9c9fca602ba6","resourceVersion":"749","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5762 chars]
	I0906 15:10:21.357428   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.357434   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.357441   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.357446   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.359175   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.359183   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.359188   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.359194   28549 round_trippers.go:580]     Audit-Id: 1b840522-c0a6-45df-ae6e-018e1ea1fbc6
	I0906 15:10:21.359200   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.359205   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.359219   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.359229   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.359273   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.359473   28549 pod_ready.go:92] pod "kube-proxy-kkmpm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.359478   28549 pod_ready.go:81] duration metric: took 4.287168ms waiting for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.359484   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.359508   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:10:21.359512   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.359518   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.359523   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.361190   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.361201   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.361208   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.361214   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.361220   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.361227   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.361233   28549 round_trippers.go:580]     Audit-Id: 54fb5d05-38bf-494b-943a-712cf0a16b99
	I0906 15:10:21.361239   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.361340   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wnrrx","generateName":"kube-proxy-","namespace":"kube-system","uid":"260cbcc2-7110-48ce-aa3d-482b3694ae6d","resourceVersion":"476","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5565 chars]
	I0906 15:10:21.361562   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:10:21.361568   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.361574   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.361579   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.363237   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.363247   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.363252   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.363257   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.363263   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.363268   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.363274   28549 round_trippers.go:580]     Audit-Id: 69ab58ef-d69f-4d8b-87c2-2737433c22fd
	I0906 15:10:21.363279   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.363326   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m02","uid":"4f069859-75f2-4e6f-a5c1-5cceb9510b05","resourceVersion":"602","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4591 chars]
	I0906 15:10:21.363477   28549 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.363485   28549 pod_ready.go:81] duration metric: took 3.996705ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.363490   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.542259   28549 request.go:533] Waited for 178.688593ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:10:21.542317   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:10:21.542325   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.542334   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.542343   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.545474   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:21.545487   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.545492   28549 round_trippers.go:580]     Audit-Id: 40b070f3-38fc-4d9d-8df0-c3e1bcf5608d
	I0906 15:10:21.545498   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.545503   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.545508   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.545514   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.545518   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.545563   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"780","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4928 chars]
	I0906 15:10:21.741068   28549 request.go:533] Waited for 195.255208ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.741118   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.741126   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.741138   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.741153   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.744098   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:21.744110   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.744116   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.744120   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.744124   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.744129   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.744134   28549 round_trippers.go:580]     Audit-Id: c2a2df9f-e749-45ea-ae89-8bb1c4f22f95
	I0906 15:10:21.744139   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.744185   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.744377   28549 pod_ready.go:92] pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.744383   28549 pod_ready.go:81] duration metric: took 380.882419ms waiting for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.744390   28549 pod_ready.go:38] duration metric: took 13.426212653s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:10:21.744403   28549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:10:21.752010   28549 command_runner.go:130] > -16
	I0906 15:10:21.752125   28549 ops.go:34] apiserver oom_adj: -16
	I0906 15:10:21.752133   28549 kubeadm.go:631] restartCluster took 24.642618508s
	I0906 15:10:21.752142   28549 kubeadm.go:398] StartCluster complete in 24.678973392s
	I0906 15:10:21.752158   28549 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:10:21.752237   28549 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:10:21.752629   28549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:10:21.753292   28549 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:10:21.753465   28549 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-2022090615060
6-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:10:21.753649   28549 round_trippers.go:463] GET https://127.0.0.1:57200/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0906 15:10:21.753655   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.753661   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.753667   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.755996   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:21.756005   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.756010   28549 round_trippers.go:580]     Audit-Id: bf9df806-cc5a-4084-a4a3-2786162f021a
	I0906 15:10:21.756017   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.756022   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.756027   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.756032   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.756037   28549 round_trippers.go:580]     Content-Length: 291
	I0906 15:10:21.756042   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.756052   28549 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a49f3069-8a92-4785-ab5f-7ea0a1721073","resourceVersion":"789","creationTimestamp":"2022-09-06T22:06:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0906 15:10:21.756138   28549 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220906150606-22187" rescaled to 1
	I0906 15:10:21.756169   28549 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:10:21.756175   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:10:21.756199   28549 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0906 15:10:21.756313   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:10:21.777725   28549 out.go:177] * Verifying Kubernetes components...
	I0906 15:10:21.777787   28549 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220906150606-22187"
	I0906 15:10:21.777810   28549 addons.go:65] Setting default-storageclass=true in profile "multinode-20220906150606-22187"
	I0906 15:10:21.777814   28549 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220906150606-22187"
	W0906 15:10:21.826051   28549 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:10:21.826050   28549 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220906150606-22187"
	I0906 15:10:21.810787   28549 command_runner.go:130] > apiVersion: v1
	I0906 15:10:21.826092   28549 command_runner.go:130] > data:
	I0906 15:10:21.826065   28549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:10:21.826106   28549 command_runner.go:130] >   Corefile: |
	I0906 15:10:21.826120   28549 command_runner.go:130] >     .:53 {
	I0906 15:10:21.826127   28549 command_runner.go:130] >         errors
	I0906 15:10:21.826135   28549 command_runner.go:130] >         health {
	I0906 15:10:21.826143   28549 command_runner.go:130] >            lameduck 5s
	I0906 15:10:21.826149   28549 command_runner.go:130] >         }
	I0906 15:10:21.826156   28549 command_runner.go:130] >         ready
	I0906 15:10:21.826177   28549 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0906 15:10:21.826189   28549 command_runner.go:130] >            pods insecure
	I0906 15:10:21.826197   28549 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0906 15:10:21.826206   28549 command_runner.go:130] >            ttl 30
	I0906 15:10:21.826212   28549 command_runner.go:130] >         }
	I0906 15:10:21.826220   28549 command_runner.go:130] >         prometheus :9153
	I0906 15:10:21.826229   28549 command_runner.go:130] >         hosts {
	I0906 15:10:21.826187   28549 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:10:21.826236   28549 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0906 15:10:21.826246   28549 command_runner.go:130] >            fallthrough
	I0906 15:10:21.826252   28549 command_runner.go:130] >         }
	I0906 15:10:21.826259   28549 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0906 15:10:21.826267   28549 command_runner.go:130] >            max_concurrent 1000
	I0906 15:10:21.826275   28549 command_runner.go:130] >         }
	I0906 15:10:21.826284   28549 command_runner.go:130] >         cache 30
	I0906 15:10:21.826293   28549 command_runner.go:130] >         loop
	I0906 15:10:21.826322   28549 command_runner.go:130] >         reload
	I0906 15:10:21.826328   28549 command_runner.go:130] >         loadbalance
	I0906 15:10:21.826334   28549 command_runner.go:130] >     }
	I0906 15:10:21.826339   28549 command_runner.go:130] > kind: ConfigMap
	I0906 15:10:21.826343   28549 command_runner.go:130] > metadata:
	I0906 15:10:21.826349   28549 command_runner.go:130] >   creationTimestamp: "2022-09-06T22:06:35Z"
	I0906 15:10:21.826353   28549 command_runner.go:130] >   name: coredns
	I0906 15:10:21.826358   28549 command_runner.go:130] >   namespace: kube-system
	I0906 15:10:21.826363   28549 command_runner.go:130] >   resourceVersion: "371"
	I0906 15:10:21.826370   28549 command_runner.go:130] >   uid: 99586de8-1370-4877-aa2d-6bd1c7354337
	I0906 15:10:21.826430   28549 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 15:10:21.826561   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:10:21.827298   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:10:21.837326   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:21.897455   28549 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:10:21.923245   28549 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:10:21.923717   28549 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-2022090615060
6-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:10:21.960625   28549 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:10:21.960647   28549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:10:21.960783   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:21.961038   28549 round_trippers.go:463] GET https://127.0.0.1:57200/apis/storage.k8s.io/v1/storageclasses
	I0906 15:10:21.961056   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.961945   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.962094   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.965829   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:21.965858   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.965866   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.965872   28549 round_trippers.go:580]     Content-Length: 1273
	I0906 15:10:21.965877   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.965887   28549 round_trippers.go:580]     Audit-Id: 404bccf5-0825-4fe3-ab9f-6998c764af66
	I0906 15:10:21.965893   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.965900   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.965908   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.966646   28549 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0906 15:10:21.967748   28549 request.go:1073] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0906 15:10:21.967792   28549 round_trippers.go:463] PUT https://127.0.0.1:57200/apis/storage.k8s.io/v1/storageclasses/standard
	I0906 15:10:21.967797   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.967803   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.967809   28549 round_trippers.go:473]     Content-Type: application/json
	I0906 15:10:21.967814   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.971606   28549 node_ready.go:35] waiting up to 6m0s for node "multinode-20220906150606-22187" to be "Ready" ...
	I0906 15:10:21.971676   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.971680   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.971686   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.971692   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.973022   28549 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 15:10:21.973053   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.973064   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.973070   28549 round_trippers.go:580]     Content-Length: 1220
	I0906 15:10:21.973074   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.973084   28549 round_trippers.go:580]     Audit-Id: 6db5d999-9ff1-4d21-aac7-bfc89c0eea42
	I0906 15:10:21.973090   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.973096   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.973102   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.973124   28549 request.go:1073] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0906 15:10:21.973207   28549 addons.go:153] Setting addon default-storageclass=true in "multinode-20220906150606-22187"
	W0906 15:10:21.973214   28549 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:10:21.973232   28549 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:10:21.973565   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:10:21.974650   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:21.974695   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.974702   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.974707   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.974714   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.974719   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.974724   28549 round_trippers.go:580]     Audit-Id: ef165d66-403e-44a6-a74f-1f7c681d97bc
	I0906 15:10:21.974728   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.975644   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.975938   28549 node_ready.go:49] node "multinode-20220906150606-22187" has status "Ready":"True"
	I0906 15:10:21.975947   28549 node_ready.go:38] duration metric: took 4.323366ms waiting for node "multinode-20220906150606-22187" to be "Ready" ...
	I0906 15:10:21.975956   28549 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:10:22.030027   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:10:22.037969   28549 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:10:22.037982   28549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:10:22.038050   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:22.102350   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:10:22.120019   28549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:10:22.141023   28549 request.go:533] Waited for 165.018627ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:22.141050   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:22.141055   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:22.141062   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:22.141067   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:22.144654   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:22.144666   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:22.144672   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:22.144677   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:22.144690   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:22 GMT
	I0906 15:10:22.144698   28549 round_trippers.go:580]     Audit-Id: a534ebf2-8dbd-490d-b160-c174b4e6a83d
	I0906 15:10:22.144704   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:22.144711   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:22.146486   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85156 chars]
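The "Waited for … due to client-side throttling, not priority and fairness" messages come from client-go's client-side token-bucket limiter, which is distinct from the server-side API Priority and Fairness signalled by the X-Kubernetes-Pf-* response headers above. A minimal sketch of where that limiter is tuned; the QPS/Burst values are illustrative, not minikube's:

```go
// Hedged sketch: the client-side rate limiter behind the
// "Waited for ... due to client-side throttling" lines. Raising QPS/Burst
// shortens those waits; the values below are illustrative.
package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	// BuildConfigFromFlags returns a *rest.Config; the client-side limiter
	// is configured via its QPS and Burst fields (roughly 5 and 10 by
	// default when left unset).
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	// Server-side API Priority and Fairness still applies regardless of
	// these client-side settings.
	return kubernetes.NewForConfig(cfg)
}
```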
	I0906 15:10:22.148977   28549 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:22.191688   28549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:10:22.314182   28549 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0906 15:10:22.315754   28549 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0906 15:10:22.318148   28549 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0906 15:10:22.320455   28549 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0906 15:10:22.322302   28549 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0906 15:10:22.329110   28549 command_runner.go:130] > pod/storage-provisioner configured
	I0906 15:10:22.341368   28549 request.go:533] Waited for 192.339579ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:22.341395   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:22.341400   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:22.341406   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:22.341412   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:22.343969   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:22.343980   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:22.343985   28549 round_trippers.go:580]     Audit-Id: 8f71dfd8-fa7f-4006-8ad1-c3455d457af4
	I0906 15:10:22.343990   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:22.343996   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:22.344007   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:22.344013   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:22.344017   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:22 GMT
	I0906 15:10:22.344084   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:22.373003   28549 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0906 15:10:22.399785   28549 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 15:10:22.441547   28549 addons.go:414] enableAddons completed in 685.350332ms
	I0906 15:10:22.541331   28549 request.go:533] Waited for 196.900226ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:22.541371   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:22.541378   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:22.541389   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:22.541401   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:22.545111   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:22.545121   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:22.545128   28549 round_trippers.go:580]     Audit-Id: 409f4ed1-e89b-402a-91ea-7f4175686da5
	I0906 15:10:22.545135   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:22.545141   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:22.545146   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:22.545151   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:22.545156   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:22 GMT
	I0906 15:10:22.545209   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:23.047744   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:23.047757   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:23.047766   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:23.047773   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:23.050885   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:23.050896   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:23.050901   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:23.050906   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:23.050911   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:23.050915   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:23.050921   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:23 GMT
	I0906 15:10:23.050925   28549 round_trippers.go:580]     Audit-Id: b3ff731a-ff9c-4c69-847d-1b5a9b396a65
	I0906 15:10:23.051006   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:23.051315   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:23.051320   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:23.051326   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:23.051332   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:23.053875   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:23.053884   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:23.053889   28549 round_trippers.go:580]     Audit-Id: 7ec5765f-3aca-45e1-8c29-7d2dc96c5a7a
	I0906 15:10:23.053897   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:23.053902   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:23.053907   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:23.053912   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:23.053916   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:23 GMT
	I0906 15:10:23.053970   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:23.547705   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:23.547725   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:23.547737   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:23.547747   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:23.551309   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:23.551321   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:23.551326   28549 round_trippers.go:580]     Audit-Id: c64181ec-2202-499b-9929-b74eb04826c6
	I0906 15:10:23.551331   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:23.551335   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:23.551340   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:23.551345   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:23.551350   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:23 GMT
	I0906 15:10:23.551418   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:23.551715   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:23.551721   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:23.551728   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:23.551732   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:23.553846   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:23.553863   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:23.553874   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:23.553880   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:23.553886   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:23.553896   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:23 GMT
	I0906 15:10:23.553901   28549 round_trippers.go:580]     Audit-Id: 18847763-bf51-42ab-8e69-5c6ef01ab3d2
	I0906 15:10:23.553907   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:23.554166   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:24.047595   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:24.047620   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:24.047632   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:24.047644   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:24.051205   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:24.051217   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:24.051229   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:24 GMT
	I0906 15:10:24.051235   28549 round_trippers.go:580]     Audit-Id: ceda892f-cbec-465e-aa16-b7e6f1fe9680
	I0906 15:10:24.051239   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:24.051244   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:24.051250   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:24.051255   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:24.051325   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:24.051619   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:24.051624   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:24.051630   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:24.051635   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:24.053360   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:24.053369   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:24.053374   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:24.053379   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:24.053385   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:24.053391   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:24.053398   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:24 GMT
	I0906 15:10:24.053407   28549 round_trippers.go:580]     Audit-Id: 39df1b93-1028-45d1-9341-18c04de35913
	I0906 15:10:24.053451   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:24.547675   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:24.547699   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:24.547710   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:24.547719   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:24.551142   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:24.551154   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:24.551160   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:24 GMT
	I0906 15:10:24.551164   28549 round_trippers.go:580]     Audit-Id: ce7cc79b-f327-4b32-a96f-42e36a612f80
	I0906 15:10:24.551170   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:24.551176   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:24.551185   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:24.551191   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:24.551260   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:24.551545   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:24.551551   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:24.551557   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:24.551565   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:24.553257   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:24.553267   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:24.553272   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:24.553277   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:24.553281   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:24.553287   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:24.553291   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:24 GMT
	I0906 15:10:24.553296   28549 round_trippers.go:580]     Audit-Id: f70e2d17-81c4-47d5-abd5-e12168f90656
	I0906 15:10:24.553342   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:24.553525   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
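pod_ready.go:102 above reports the pod's Ready condition as False, and the request timestamps show the check retrying at roughly 500ms intervals until the 6m0s budget runs out. The condition test itself is a scan of pod.Status.Conditions; a minimal sketch with an illustrative helper name:

```go
// Hedged sketch: the Ready-condition test behind the
// `has status "Ready":"False"` lines. The helper name is illustrative.
package example

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether the pod's PodReady condition is True.
// CoreDNS stays not-ready here until its readiness probe passes.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```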
	I0906 15:10:25.045767   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:25.045788   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:25.045801   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:25.045812   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:25.049305   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:25.049317   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:25.049323   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:25.049328   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:25 GMT
	I0906 15:10:25.049333   28549 round_trippers.go:580]     Audit-Id: f02d307a-4829-43e6-86fb-ebff9064d8ce
	I0906 15:10:25.049338   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:25.049343   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:25.049347   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:25.049492   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:25.049797   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:25.049803   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:25.049809   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:25.049814   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:25.051825   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:25.051835   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:25.051840   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:25.051845   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:25 GMT
	I0906 15:10:25.051850   28549 round_trippers.go:580]     Audit-Id: 817a5bb4-470a-43b0-a07c-9c386d714dad
	I0906 15:10:25.051856   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:25.051862   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:25.051866   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:25.051981   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:25.547508   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:25.547520   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:25.547526   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:25.547531   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:25.550038   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:25.550059   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:25.550066   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:25 GMT
	I0906 15:10:25.550071   28549 round_trippers.go:580]     Audit-Id: d9c75459-d02a-4a31-bd3e-ca2d2df40f69
	I0906 15:10:25.550076   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:25.550081   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:25.550086   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:25.550090   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:25.550151   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:25.550434   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:25.550441   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:25.550447   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:25.550463   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:25.552756   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:25.552768   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:25.552774   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:25.552779   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:25.552784   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:25.552789   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:25.552793   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:25 GMT
	I0906 15:10:25.552798   28549 round_trippers.go:580]     Audit-Id: f06b59d7-7628-430f-b828-f162baf7f454
	I0906 15:10:25.552918   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:26.045817   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:26.045836   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:26.045844   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:26.045851   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:26.049198   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:26.049212   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:26.049217   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:26.049234   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:26 GMT
	I0906 15:10:26.049242   28549 round_trippers.go:580]     Audit-Id: d640682b-c1fb-4b44-a564-1af47273b749
	I0906 15:10:26.049247   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:26.049257   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:26.049262   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:26.049322   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:26.049612   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:26.049618   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:26.049624   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:26.049629   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:26.051405   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:26.051415   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:26.051421   28549 round_trippers.go:580]     Audit-Id: 1b9b50cb-af42-403e-8075-da1b906a9a82
	I0906 15:10:26.051425   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:26.051429   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:26.051434   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:26.051440   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:26.051444   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:26 GMT
	I0906 15:10:26.051670   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:26.547675   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:26.547707   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:26.547721   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:26.547732   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:26.553980   28549 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 15:10:26.553993   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:26.553999   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:26.554004   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:26.554008   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:26.554013   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:26.554018   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:26 GMT
	I0906 15:10:26.554022   28549 round_trippers.go:580]     Audit-Id: 0a650b8f-fadb-400d-9444-948e9d96fb33
	I0906 15:10:26.554090   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:26.554393   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:26.554399   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:26.554410   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:26.554416   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:26.556213   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:26.556222   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:26.556228   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:26 GMT
	I0906 15:10:26.556236   28549 round_trippers.go:580]     Audit-Id: 8a52147c-3b7d-42b6-8249-7ca52167e7d2
	I0906 15:10:26.556241   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:26.556245   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:26.556250   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:26.556254   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:26.556304   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:26.556484   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:27.047629   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:27.047653   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:27.047665   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:27.047675   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:27.051099   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:27.051112   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:27.051120   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:27.051125   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:27.051139   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:27 GMT
	I0906 15:10:27.051150   28549 round_trippers.go:580]     Audit-Id: 1fdc61ac-9ce4-44b2-aee4-ebff17d0b5ea
	I0906 15:10:27.051157   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:27.051164   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:27.051364   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:27.051655   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:27.051661   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:27.051668   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:27.051674   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:27.053609   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:27.053618   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:27.053624   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:27.053630   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:27.053637   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:27.053644   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:27.053649   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:27 GMT
	I0906 15:10:27.053654   28549 round_trippers.go:580]     Audit-Id: 9883a2fb-a208-48c4-9be0-9feb9e4757d1
	I0906 15:10:27.053729   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:27.546606   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:27.546631   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:27.546642   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:27.546652   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:27.550034   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:27.550046   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:27.550052   28549 round_trippers.go:580]     Audit-Id: 86834fcd-92cd-4477-995e-e0275f298ff0
	I0906 15:10:27.550057   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:27.550061   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:27.550066   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:27.550083   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:27.550093   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:27 GMT
	I0906 15:10:27.550176   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:27.550471   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:27.550477   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:27.550483   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:27.550488   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:27.552295   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:27.552303   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:27.552308   28549 round_trippers.go:580]     Audit-Id: 2a5da242-f201-43e7-941f-80560f4a8531
	I0906 15:10:27.552313   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:27.552318   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:27.552322   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:27.552327   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:27.552332   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:27 GMT
	I0906 15:10:27.552375   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:28.047524   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:28.047537   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:28.047543   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:28.047548   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:28.050113   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:28.050123   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:28.050128   28549 round_trippers.go:580]     Audit-Id: a5548866-3ce1-4641-a000-02cafe90c523
	I0906 15:10:28.050133   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:28.050140   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:28.050145   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:28.050160   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:28.050167   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:28 GMT
	I0906 15:10:28.050238   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:28.050537   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:28.050543   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:28.050549   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:28.050555   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:28.052670   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:28.052682   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:28.052688   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:28.052695   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:28.052700   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:28.052705   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:28.052710   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:28 GMT
	I0906 15:10:28.052714   28549 round_trippers.go:580]     Audit-Id: 5e36ad67-1ba1-4349-a16a-2aa45c39189c
	I0906 15:10:28.052765   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:28.547571   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:28.547592   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:28.547605   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:28.547614   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:28.550674   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:28.550687   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:28.550692   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:28.550697   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:28.550701   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:28 GMT
	I0906 15:10:28.550705   28549 round_trippers.go:580]     Audit-Id: 76cc177d-8ce1-4351-bbc5-c5f029c98947
	I0906 15:10:28.550709   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:28.550713   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:28.550777   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:28.551070   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:28.551076   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:28.551082   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:28.551087   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:28.553448   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:28.553457   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:28.553462   28549 round_trippers.go:580]     Audit-Id: 9f22805b-8c49-43ab-86ef-f8906f39fe75
	I0906 15:10:28.553467   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:28.553472   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:28.553477   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:28.553482   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:28.553486   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:28 GMT
	I0906 15:10:28.553539   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:29.045724   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:29.045741   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:29.045750   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:29.045757   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:29.048810   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:29.048822   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:29.048827   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:29.048833   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:29.048845   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:29 GMT
	I0906 15:10:29.048861   28549 round_trippers.go:580]     Audit-Id: 671cc099-c836-4753-970f-44af3300d499
	I0906 15:10:29.048871   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:29.048882   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:29.048954   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:29.049236   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:29.049242   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:29.049247   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:29.049254   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:29.050902   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:29.050911   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:29.050918   28549 round_trippers.go:580]     Audit-Id: 534ff1c7-b275-49ba-ac46-a7f89a06c446
	I0906 15:10:29.050925   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:29.050930   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:29.050935   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:29.050939   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:29.050944   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:29 GMT
	I0906 15:10:29.051105   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:29.051290   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
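The entries above are one iteration of minikube's readiness poll: roughly every 500 ms it GETs the coredns pod and the control-plane node from the API server, and pod_ready.go:102 records that the pod's "Ready" condition is still "False". A minimal client-go sketch of this kind of poll follows; the kubeconfig loading, the 6-minute timeout, and all names in main are illustrative assumptions, not minikube's actual implementation.

// readiness_poll_sketch.go — illustrative sketch only, not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed: load the default kubeconfig; minikube builds its client differently.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 500 ms, matching the ~500 ms cadence visible in the log
	// timestamps; the 6-minute timeout is an assumption for the sketch.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-565d847f94-t6l66", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		if !isPodReady(pod) {
			fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":\"False\"\n", pod.Name)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for pod readiness:", err)
	}
}

The same condition can be checked by hand against a running cluster:

kubectl -n kube-system get pod coredns-565d847f94-t6l66 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'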
	I0906 15:10:29.546458   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:29.546477   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:29.546489   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:29.546499   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:29.550275   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:29.550292   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:29.550300   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:29.550307   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:29 GMT
	I0906 15:10:29.550312   28549 round_trippers.go:580]     Audit-Id: f205fcec-7d4f-4d4c-b4b8-4de433a6f237
	I0906 15:10:29.550319   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:29.550325   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:29.550371   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:29.550553   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:29.550840   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:29.550846   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:29.550852   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:29.550857   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:29.552963   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:29.552972   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:29.552978   28549 round_trippers.go:580]     Audit-Id: 7dbf2f45-d4ad-4cda-9601-593021a5e75b
	I0906 15:10:29.552983   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:29.552988   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:29.552992   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:29.552999   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:29.553004   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:29 GMT
	I0906 15:10:29.553051   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:30.047306   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:30.047325   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:30.047334   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:30.047341   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:30.050441   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:30.050453   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:30.050458   28549 round_trippers.go:580]     Audit-Id: 3875b2e7-3248-49ed-8e9c-c3b38ad3dcb6
	I0906 15:10:30.050463   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:30.050467   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:30.050471   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:30.050476   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:30.050481   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:30 GMT
	I0906 15:10:30.050543   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:30.050834   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:30.050839   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:30.050845   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:30.050850   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:30.052749   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:30.052757   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:30.052762   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:30.052766   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:30.052772   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:30.052776   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:30 GMT
	I0906 15:10:30.052780   28549 round_trippers.go:580]     Audit-Id: 770c938f-79c9-4705-a671-36b7d435a6d8
	I0906 15:10:30.052785   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:30.052828   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:30.547549   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:30.547560   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:30.547567   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:30.547573   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:30.550151   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:30.550161   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:30.550168   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:30.550174   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:30.550180   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:30 GMT
	I0906 15:10:30.550185   28549 round_trippers.go:580]     Audit-Id: 664f2b14-e009-4571-820a-086420394757
	I0906 15:10:30.550190   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:30.550195   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:30.550251   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:30.550527   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:30.550534   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:30.550540   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:30.550546   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:30.552422   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:30.552431   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:30.552436   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:30.552441   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:30.552446   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:30 GMT
	I0906 15:10:30.552450   28549 round_trippers.go:580]     Audit-Id: 5ad32fb8-2a5b-42a9-91bc-5d2faa5944c6
	I0906 15:10:30.552456   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:30.552460   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:30.552524   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:31.046220   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:31.046251   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:31.046263   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:31.046272   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:31.048835   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:31.048845   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:31.048852   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:31 GMT
	I0906 15:10:31.048857   28549 round_trippers.go:580]     Audit-Id: 467c7b64-393c-49ab-b2c8-6306470b8bb5
	I0906 15:10:31.048863   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:31.048867   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:31.048874   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:31.048879   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:31.048941   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:31.049234   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:31.049240   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:31.049246   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:31.049251   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:31.051143   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:31.051153   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:31.051159   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:31.051164   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:31.051168   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:31.051173   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:31.051177   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:31 GMT
	I0906 15:10:31.051182   28549 round_trippers.go:580]     Audit-Id: 79554f34-2af1-4936-9e6b-20db7ee159e1
	I0906 15:10:31.051604   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:31.051790   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:31.547574   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:31.547593   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:31.547601   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:31.547608   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:31.550721   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:31.550735   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:31.550740   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:31.550745   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:31.550753   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:31.550758   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:31 GMT
	I0906 15:10:31.550765   28549 round_trippers.go:580]     Audit-Id: 75ee567f-f7cf-412d-9f0b-3e5b78432f4f
	I0906 15:10:31.550770   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:31.550829   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:31.551119   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:31.551125   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:31.551131   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:31.551137   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:31.552911   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:31.552920   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:31.552925   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:31.552930   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:31.552935   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:31.552940   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:31.552945   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:31 GMT
	I0906 15:10:31.552949   28549 round_trippers.go:580]     Audit-Id: 4db668bf-eb8d-4c7a-985e-f281273e273f
	I0906 15:10:31.552993   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:32.046555   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:32.046570   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:32.046578   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:32.046585   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:32.049435   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:32.049445   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:32.049451   28549 round_trippers.go:580]     Audit-Id: a325082a-9555-4433-8d69-4b4e47d01200
	I0906 15:10:32.049456   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:32.049461   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:32.049465   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:32.049472   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:32.049477   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:32 GMT
	I0906 15:10:32.049758   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:32.050051   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:32.050057   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:32.050063   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:32.050068   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:32.051928   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:32.051936   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:32.051943   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:32.051948   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:32.051953   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:32 GMT
	I0906 15:10:32.051958   28549 round_trippers.go:580]     Audit-Id: ea01f728-d55d-4c3d-ae85-812fc7eda3c8
	I0906 15:10:32.051966   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:32.051993   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:32.052374   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:32.546640   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:32.546655   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:32.546664   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:32.546672   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:32.549907   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:32.549922   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:32.549929   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:32.549940   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:32.549947   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:32 GMT
	I0906 15:10:32.549954   28549 round_trippers.go:580]     Audit-Id: 53e5e02f-84f7-4150-9e5b-3df0b9d7800d
	I0906 15:10:32.549958   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:32.549963   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:32.550059   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:32.550415   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:32.550421   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:32.550426   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:32.550432   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:32.552338   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:32.552347   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:32.552352   28549 round_trippers.go:580]     Audit-Id: 50334f53-ef2a-40fc-805c-07f3edf00919
	I0906 15:10:32.552357   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:32.552362   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:32.552366   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:32.552371   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:32.552376   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:32 GMT
	I0906 15:10:32.552426   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:33.045575   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:33.045595   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:33.045619   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:33.045649   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:33.049069   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:33.049087   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:33.049093   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:33 GMT
	I0906 15:10:33.049097   28549 round_trippers.go:580]     Audit-Id: ea530799-0317-4ceb-b493-50ecb637a3db
	I0906 15:10:33.049102   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:33.049106   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:33.049111   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:33.049116   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:33.049174   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:33.049468   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:33.049474   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:33.049479   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:33.049484   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:33.051171   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:33.051185   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:33.051191   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:33.051197   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:33.051203   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:33 GMT
	I0906 15:10:33.051210   28549 round_trippers.go:580]     Audit-Id: f954eac1-cdb1-4d9d-8da1-3c9775bc8a8b
	I0906 15:10:33.051220   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:33.051227   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:33.051416   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:33.545976   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:33.545992   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:33.546002   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:33.546009   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:33.549335   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:33.549345   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:33.549350   28549 round_trippers.go:580]     Audit-Id: 9101f0e7-d06a-4240-a771-85d25578713b
	I0906 15:10:33.549355   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:33.549361   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:33.549367   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:33.549374   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:33.549381   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:33 GMT
	I0906 15:10:33.549542   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:33.549843   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:33.549849   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:33.549857   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:33.549862   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:33.551791   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:33.551801   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:33.551806   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:33.551810   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:33.551815   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:33 GMT
	I0906 15:10:33.551819   28549 round_trippers.go:580]     Audit-Id: 9f76bf75-c884-4c90-9aa5-85ebacfb4245
	I0906 15:10:33.551824   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:33.551828   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:33.551891   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:33.552082   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:34.045997   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:34.046012   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:34.046023   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:34.046031   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:34.049256   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:34.049269   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:34.049274   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:34.049280   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:34 GMT
	I0906 15:10:34.049285   28549 round_trippers.go:580]     Audit-Id: d53b5e7e-c6f7-43ab-88cf-d3bf7706ce8a
	I0906 15:10:34.049293   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:34.049299   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:34.049303   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:34.049368   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:34.049667   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:34.049672   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:34.049679   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:34.049684   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:34.051448   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:34.051457   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:34.051465   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:34.051472   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:34 GMT
	I0906 15:10:34.051478   28549 round_trippers.go:580]     Audit-Id: e267c844-e59c-40e9-bebe-a92c1e074ba4
	I0906 15:10:34.051486   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:34.051492   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:34.051499   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:34.051696   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:34.546505   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:34.546521   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:34.546532   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:34.546539   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:34.550033   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:34.550045   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:34.550050   28549 round_trippers.go:580]     Audit-Id: 4f2787ac-b385-4170-8379-25aecd67d2e0
	I0906 15:10:34.550059   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:34.550064   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:34.550069   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:34.550073   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:34.550078   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:34 GMT
	I0906 15:10:34.550161   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:34.550463   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:34.550468   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:34.550474   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:34.550480   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:34.552575   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:34.552584   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:34.552591   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:34.552596   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:34 GMT
	I0906 15:10:34.552601   28549 round_trippers.go:580]     Audit-Id: 92b5e469-c0b3-4dc0-9c56-09aaa89cc003
	I0906 15:10:34.552605   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:34.552610   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:34.552615   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:34.552666   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:35.047684   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:35.047703   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:35.047715   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:35.047724   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:35.051578   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:35.051595   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:35.051603   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:35.051610   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:35.051620   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:35.051635   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:35 GMT
	I0906 15:10:35.051643   28549 round_trippers.go:580]     Audit-Id: 22bc624f-0864-4015-9f75-f1133bf150ac
	I0906 15:10:35.051653   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:35.051859   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:35.052234   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:35.052240   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:35.052246   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:35.052253   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:35.054096   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:35.054105   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:35.054110   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:35 GMT
	I0906 15:10:35.054115   28549 round_trippers.go:580]     Audit-Id: bb03e78d-1410-42c6-b1ac-54386baca4be
	I0906 15:10:35.054119   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:35.054124   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:35.054132   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:35.054137   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:35.054185   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:35.547171   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:35.547191   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:35.547212   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:35.547222   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:35.549574   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:35.549584   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:35.549595   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:35.549600   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:35.549604   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:35 GMT
	I0906 15:10:35.549610   28549 round_trippers.go:580]     Audit-Id: 678334ab-8f6c-47e7-a5e8-dc18f1f08b05
	I0906 15:10:35.549616   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:35.549624   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:35.549965   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:35.550262   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:35.550269   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:35.550277   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:35.550283   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:35.552467   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:35.552478   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:35.552486   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:35.552492   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:35.552500   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:35.552506   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:35 GMT
	I0906 15:10:35.552511   28549 round_trippers.go:580]     Audit-Id: 3d2124de-f001-4ec1-9142-5a7d3352e969
	I0906 15:10:35.552515   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:35.552840   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:35.553062   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:36.045660   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:36.045673   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:36.045688   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:36.045699   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:36.048142   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:36.048151   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:36.048162   28549 round_trippers.go:580]     Audit-Id: 6e3209f9-0db0-4795-896a-f38de8787387
	I0906 15:10:36.048169   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:36.048179   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:36.048187   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:36.048191   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:36.048197   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:36 GMT
	I0906 15:10:36.048494   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:36.048785   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:36.048791   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:36.048796   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:36.048802   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:36.050689   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:36.050698   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:36.050703   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:36 GMT
	I0906 15:10:36.050708   28549 round_trippers.go:580]     Audit-Id: 6a2b0de6-9fe6-4aa1-856f-d38a6bcf3e5c
	I0906 15:10:36.050712   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:36.050717   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:36.050721   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:36.050726   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:36.051033   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:36.546262   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:36.546280   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:36.546292   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:36.546301   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:36.549350   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:36.549361   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:36.549366   28549 round_trippers.go:580]     Audit-Id: 9fa66cb6-3b5e-4e2d-a1f9-cd18131ba438
	I0906 15:10:36.549371   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:36.549376   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:36.549380   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:36.549385   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:36.549396   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:36 GMT
	I0906 15:10:36.549598   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:36.549914   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:36.549921   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:36.549926   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:36.549932   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:36.552003   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:36.552011   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:36.552016   28549 round_trippers.go:580]     Audit-Id: ee04c94c-32d4-4998-807f-db8aa6e3b72d
	I0906 15:10:36.552023   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:36.552027   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:36.552032   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:36.552036   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:36.552042   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:36 GMT
	I0906 15:10:36.552087   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:37.047532   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:37.047551   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:37.047562   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:37.047572   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:37.051775   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:37.051785   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:37.051791   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:37.051795   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:37.051802   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:37.051815   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:37 GMT
	I0906 15:10:37.051827   28549 round_trippers.go:580]     Audit-Id: 22681f2f-53ec-4702-b6b0-56ea34de7585
	I0906 15:10:37.051836   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:37.051946   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:37.052233   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:37.052239   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:37.052245   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:37.052250   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:37.054240   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:37.054249   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:37.054254   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:37.054260   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:37 GMT
	I0906 15:10:37.054265   28549 round_trippers.go:580]     Audit-Id: 3c7dd69f-b528-41ce-b06c-66786b2aafc2
	I0906 15:10:37.054270   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:37.054275   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:37.054279   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:37.054333   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:37.547058   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:37.547086   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:37.547098   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:37.547107   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:37.550208   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:37.550220   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:37.550225   28549 round_trippers.go:580]     Audit-Id: e2c89126-c028-4e0c-bc00-e7539f1a26d1
	I0906 15:10:37.550230   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:37.550239   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:37.550245   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:37.550250   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:37.550254   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:37 GMT
	I0906 15:10:37.550313   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:37.550603   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:37.550608   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:37.550614   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:37.550619   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:37.552512   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:37.552521   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:37.552526   28549 round_trippers.go:580]     Audit-Id: 4ee0e1a6-dbc5-4b9d-bea3-00c1f60dc9cf
	I0906 15:10:37.552534   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:37.552540   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:37.552545   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:37.552553   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:37.552559   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:37 GMT
	I0906 15:10:37.552752   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:38.047527   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:38.047543   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:38.047552   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:38.047559   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:38.050269   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:38.050282   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:38.050290   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:38.050295   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:38.050300   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:38 GMT
	I0906 15:10:38.050306   28549 round_trippers.go:580]     Audit-Id: 576a0985-641c-457a-8644-259361efd747
	I0906 15:10:38.050312   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:38.050316   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:38.050443   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:38.050729   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:38.050736   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:38.050742   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:38.050747   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:38.052979   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:38.052987   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:38.052994   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:38.052998   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:38.053003   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:38 GMT
	I0906 15:10:38.053008   28549 round_trippers.go:580]     Audit-Id: cad48e58-3124-45d0-8ad3-133aa9249993
	I0906 15:10:38.053012   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:38.053017   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:38.053382   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:38.053556   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:38.545684   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:38.545709   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:38.545721   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:38.545730   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:38.549585   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:38.549598   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:38.549614   28549 round_trippers.go:580]     Audit-Id: b68bb6d5-0fdd-4822-a879-2fcd9e707b81
	I0906 15:10:38.549632   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:38.549639   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:38.549646   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:38.549674   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:38.549688   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:38 GMT
	I0906 15:10:38.549952   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:38.550249   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:38.550260   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:38.550267   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:38.550274   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:38.552337   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:38.552345   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:38.552350   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:38.552354   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:38.552360   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:38.552364   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:38 GMT
	I0906 15:10:38.552368   28549 round_trippers.go:580]     Audit-Id: c9212d3a-c7e7-4e72-bcf0-002c16d22a98
	I0906 15:10:38.552379   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:38.552423   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:39.047659   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:39.047679   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:39.047691   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:39.047700   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:39.050926   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:39.050940   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:39.050945   28549 round_trippers.go:580]     Audit-Id: c7cb2505-08dd-4a84-b27b-d67fbba54924
	I0906 15:10:39.050950   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:39.050954   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:39.050958   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:39.050962   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:39.050967   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:39 GMT
	I0906 15:10:39.051024   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:39.051311   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:39.051317   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:39.051323   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:39.051327   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:39.053035   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:39.053045   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:39.053060   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:39 GMT
	I0906 15:10:39.053071   28549 round_trippers.go:580]     Audit-Id: f2ebf6d9-45b4-43ed-9da1-18fd535e79c5
	I0906 15:10:39.053077   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:39.053088   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:39.053095   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:39.053104   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:39.053375   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:39.545651   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:39.545700   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:39.545709   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:39.545716   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:39.548427   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:39.548438   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:39.548444   28549 round_trippers.go:580]     Audit-Id: 50e1d6ce-978a-4669-8228-80396ff7e22f
	I0906 15:10:39.548453   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:39.548460   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:39.548468   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:39.548476   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:39.548483   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:39 GMT
	I0906 15:10:39.548715   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:39.549007   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:39.549016   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:39.549022   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:39.549027   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:39.550835   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:39.550845   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:39.550851   28549 round_trippers.go:580]     Audit-Id: 481beb84-7cc0-46c0-9065-f2492622bb85
	I0906 15:10:39.550861   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:39.550867   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:39.550874   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:39.550880   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:39.550885   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:39 GMT
	I0906 15:10:39.551309   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.047259   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:40.047284   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.047296   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.047307   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.051101   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:40.051116   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.051131   28549 round_trippers.go:580]     Audit-Id: bade95fa-d7f2-4ac1-9a06-f68fc9daead1
	I0906 15:10:40.051140   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.051147   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.051154   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.051160   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.051166   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.051730   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:40.052024   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.052030   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.052036   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.052041   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.053899   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.053910   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.053916   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.053921   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.053925   28549 round_trippers.go:580]     Audit-Id: e6de9ba4-7a91-4f74-976f-6eb4c96856ee
	I0906 15:10:40.053930   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.053937   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.053944   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.054186   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.054375   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:40.547874   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:40.547897   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.547911   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.547922   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.552020   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:40.552032   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.552037   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.552042   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.552046   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.552050   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.552055   28549 round_trippers.go:580]     Audit-Id: faa665bc-4607-45c0-b283-c0d0a3f40061
	I0906 15:10:40.552060   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.552117   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6564 chars]
	I0906 15:10:40.552408   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.552414   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.552421   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.552427   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.554339   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.554348   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.554353   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.554358   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.554363   28549 round_trippers.go:580]     Audit-Id: 10600d2f-bbcb-485e-9490-30df3093b6fb
	I0906 15:10:40.554367   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.554372   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.554376   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.554475   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.554658   28549 pod_ready.go:92] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.554667   28549 pod_ready.go:81] duration metric: took 18.405611621s waiting for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
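	[editor's note] The lines above are the tail of minikube's readiness poll: pod_ready.go re-GETs the pod roughly every 500ms (note the ~500ms spacing of the timestamps), inspects the pod's Ready condition, and stops once it flips to True. A minimal client-go sketch of that pattern — a hypothetical helper, not minikube's actual pod_ready.go code — looks like this:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition reports True or the
// timeout expires. Interval and error handling are illustrative assumptions.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // a softer variant could retry transient API errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // condition not posted yet; keep polling
	})
}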
	I0906 15:10:40.554673   28549 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.554698   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:10:40.554702   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.554708   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.554714   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.556540   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.556549   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.556555   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.556560   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.556565   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.556569   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.556574   28549 round_trippers.go:580]     Audit-Id: 51e73d65-bf86-4fbd-8df5-6011368361db
	I0906 15:10:40.556578   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.556663   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"765","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash" [truncated 6113 chars]
	I0906 15:10:40.556875   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.556880   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.556888   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.556894   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.558614   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.558622   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.558627   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.558632   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.558637   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.558641   28549 round_trippers.go:580]     Audit-Id: bec0a7c4-a810-4897-aa54-080e7a79cd84
	I0906 15:10:40.558646   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.558650   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.558701   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.558880   28549 pod_ready.go:92] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.558885   28549 pod_ready.go:81] duration metric: took 4.207291ms waiting for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.558895   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.558923   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:40.558927   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.558933   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.558939   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.560775   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.560784   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.560790   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.560795   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.560800   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.560804   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.560810   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.560814   28549 round_trippers.go:580]     Audit-Id: 899bf322-5d44-4e62-b581-cae28da40437
	I0906 15:10:40.561103   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"793","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8471 chars]
	I0906 15:10:40.561368   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.561374   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.561381   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.561388   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.563252   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.563259   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.563264   28549 round_trippers.go:580]     Audit-Id: 00c110af-9bcc-43ce-9a93-1f6997127e1b
	I0906 15:10:40.563269   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.563274   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.563281   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.563287   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.563291   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.563337   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.563523   28549 pod_ready.go:92] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.563529   28549 pod_ready.go:81] duration metric: took 4.629465ms waiting for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.563535   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.563559   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:10:40.563563   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.563569   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.563574   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.565277   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.565286   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.565291   28549 round_trippers.go:580]     Audit-Id: 6f5a9db0-e549-4203-8cbd-cb94fcae6727
	I0906 15:10:40.565297   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.565301   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.565306   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.565310   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.565315   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.565371   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"768","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 8044 chars]
	I0906 15:10:40.565617   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.565622   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.565628   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.565633   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.567245   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.567257   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.567262   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.567268   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.567272   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.567277   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.567282   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.567286   28549 round_trippers.go:580]     Audit-Id: 17d3fb77-1ff2-4b0a-a6a7-b40f88e027a4
	I0906 15:10:40.567355   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.567535   28549 pod_ready.go:92] pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.567543   28549 pod_ready.go:81] duration metric: took 4.002983ms waiting for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.567551   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.567576   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-czbjx
	I0906 15:10:40.567580   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.567585   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.567591   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.569377   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.569385   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.569390   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.569397   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.569401   28549 round_trippers.go:580]     Audit-Id: 056f7804-ce91-4dd8-a5ca-ac09f2de9214
	I0906 15:10:40.569405   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.569410   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.569415   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.569457   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-czbjx","generateName":"kube-proxy-","namespace":"kube-system","uid":"c88daf0a-05d7-45b7-b888-8e0749e4d321","resourceVersion":"672","creationTimestamp":"2022-09-06T22:08:13Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:08:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5772 chars]
	I0906 15:10:40.569692   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m03
	I0906 15:10:40.569698   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.569704   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.569709   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.571346   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.571806   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.571821   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.571826   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.571833   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.571840   28549 round_trippers.go:580]     Audit-Id: 494f365f-854e-46f0-a8c1-9e5e2539cb8b
	I0906 15:10:40.571847   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.571852   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.571982   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m03","uid":"268cefad-05d1-4e4b-b44e-2d8678e78e39","resourceVersion":"685","creationTimestamp":"2022-09-06T22:09:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:09:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostnam
e":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Upd [truncated 4408 chars]
	I0906 15:10:40.572363   28549 pod_ready.go:92] pod "kube-proxy-czbjx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.572372   28549 pod_ready.go:81] duration metric: took 4.815433ms waiting for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.572386   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.748168   28549 request.go:533] Waited for 175.735325ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:10:40.748217   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:10:40.748225   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.748269   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.748285   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.752135   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:40.752151   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.752158   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.752165   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.752171   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.752177   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.752183   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.752190   28549 round_trippers.go:580]     Audit-Id: ed592aff-d284-4033-90e6-f21d3a7c3d5a
	I0906 15:10:40.752261   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kkmpm","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b228e9a-6577-46a3-b848-9c9fca602ba6","resourceVersion":"749","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5762 chars]
	I0906 15:10:40.949108   28549 request.go:533] Waited for 196.461342ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.949199   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.949206   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.949222   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.949230   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.952211   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:40.952223   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.952229   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.952238   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.952243   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.952248   28549 round_trippers.go:580]     Audit-Id: b341cdee-9db9-4707-8c63-e9c124efc28f
	I0906 15:10:40.952254   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.952259   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.952439   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.952640   28549 pod_ready.go:92] pod "kube-proxy-kkmpm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.952647   28549 pod_ready.go:81] duration metric: took 380.254549ms waiting for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
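	[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's local rate limiter (default QPS 5, burst 10), which this tight per-pod polling loop exceeds; the requests queue in the client, not at the API server. A hedged sketch of where that knob lives — the QPS/Burst values below are illustrative, not minikube's settings:

package clientcfg

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with a raised client-side rate limit so
// rapid polling is not queued locally.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default burst is 10
	return kubernetes.NewForConfig(cfg)
}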
	I0906 15:10:40.952653   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:41.148076   28549 request.go:533] Waited for 195.384766ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:10:41.148153   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:10:41.148164   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.148175   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.148186   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.151456   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:41.151466   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.151471   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.151476   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.151481   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.151486   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.151491   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.151495   28549 round_trippers.go:580]     Audit-Id: ca28b027-c218-4c1d-81b5-0d3f8e13d505
	I0906 15:10:41.151545   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wnrrx","generateName":"kube-proxy-","namespace":"kube-system","uid":"260cbcc2-7110-48ce-aa3d-482b3694ae6d","resourceVersion":"476","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5565 chars]
	I0906 15:10:41.348714   28549 request.go:533] Waited for 196.910755ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:10:41.348783   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:10:41.348795   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.348806   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.348818   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.352555   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:41.352573   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.352580   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.352587   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.352594   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.352600   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.352606   28549 round_trippers.go:580]     Audit-Id: 1097b0c1-4d2a-494b-9ede-60cc95bcb0f8
	I0906 15:10:41.352611   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.352681   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m02","uid":"4f069859-75f2-4e6f-a5c1-5cceb9510b05","resourceVersion":"602","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4591 chars]
	I0906 15:10:41.352956   28549 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:41.352986   28549 pod_ready.go:81] duration metric: took 400.326262ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:41.352993   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:41.549910   28549 request.go:533] Waited for 196.88201ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:10:41.549972   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:10:41.550005   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.550019   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.550032   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.553706   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:41.553722   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.553730   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.553738   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.553746   28549 round_trippers.go:580]     Audit-Id: dc707c21-d82b-4758-b0ae-8f0ce57bdcb2
	I0906 15:10:41.553752   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.553759   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.553766   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.553858   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"780","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4928 chars]
	I0906 15:10:41.749983   28549 request.go:533] Waited for 195.790388ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:41.750090   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:41.750098   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.750114   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.750132   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.753925   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:41.753940   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.753947   28549 round_trippers.go:580]     Audit-Id: 2ca9c8db-f653-45d7-a86c-f23683ebdd7e
	I0906 15:10:41.753953   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.753960   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.753966   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.753972   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.753978   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.754225   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:41.754489   28549 pod_ready.go:92] pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:41.754500   28549 pod_ready.go:81] duration metric: took 401.499125ms waiting for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:41.754509   28549 pod_ready.go:38] duration metric: took 19.778479428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:10:41.754528   28549 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:10:41.754583   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:10:41.763591   28549 command_runner.go:130] > 1664
	I0906 15:10:41.764358   28549 api_server.go:71] duration metric: took 20.008105359s to wait for apiserver process to appear ...
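	[editor's note] Before probing health over HTTP, minikube confirms an apiserver process exists by running pgrep through its ssh_runner; the single command_runner output line ("1664") is the matched PID. Reduced to a local call — a sketch, not minikube's ssh_runner code — the same probe is:

package procs

import (
	"os/exec"
	"strings"
)

// apiserverPID runs the pgrep shown in the log: -f matches against the full
// command line, -x requires the whole line to match the pattern, and -n
// selects the newest matching process. pgrep exits non-zero when nothing
// matches, which surfaces here as err.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}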
	I0906 15:10:41.764367   28549 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:10:41.764374   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:41.770061   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 200:
	ok
	I0906 15:10:41.770090   28549 round_trippers.go:463] GET https://127.0.0.1:57200/version
	I0906 15:10:41.770095   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.770101   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.770108   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.770961   28549 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 15:10:41.770970   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.770975   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.770980   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.770985   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.770989   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.770994   28549 round_trippers.go:580]     Content-Length: 261
	I0906 15:10:41.770999   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.771004   28549 round_trippers.go:580]     Audit-Id: fd3a2c9b-f6d4-4525-a3eb-399fa18c42e3
	I0906 15:10:41.771106   28549 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.0",
	  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
	  "gitTreeState": "clean",
	  "buildDate": "2022-08-23T17:38:15Z",
	  "goVersion": "go1.19",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 15:10:41.771131   28549 api_server.go:140] control plane version: v1.25.0
	I0906 15:10:41.771137   28549 api_server.go:130] duration metric: took 6.765742ms to wait for apiserver health ...
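	[editor's note] The two probes above — GET /healthz expecting the literal body "ok", then GET /version whose gitVersion is logged as "control plane version" — map onto stock client-go calls. A minimal sketch (the helper name is mine):

package health

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkControlPlane performs the same two probes the log shows: a raw GET of
// /healthz, then the discovery /version endpoint.
func checkControlPlane(cs kubernetes.Interface) (string, error) {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		return "", fmt.Errorf("healthz: %w", err)
	}
	if string(body) != "ok" {
		return "", fmt.Errorf("healthz returned %q", body)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	return v.GitVersion, nil // e.g. "v1.25.0" in the run above
}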
	I0906 15:10:41.771142   28549 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:10:41.949934   28549 request.go:533] Waited for 178.751849ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:41.949975   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:41.949986   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.949999   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.950044   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.955316   28549 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 15:10:41.955328   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.955334   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.955344   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.955350   28549 round_trippers.go:580]     Audit-Id: 2bb44cf3-a985-49e9-9a82-5a328c0b13b2
	I0906 15:10:41.955354   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.955360   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.955368   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.956655   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85340 chars]
	I0906 15:10:41.958514   28549 system_pods.go:59] 12 kube-system pods found
	I0906 15:10:41.958526   28549 system_pods.go:61] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:10:41.958530   28549 system_pods.go:61] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:10:41.958534   28549 system_pods.go:61] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:10:41.958537   28549 system_pods.go:61] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:10:41.958541   28549 system_pods.go:61] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:10:41.958544   28549 system_pods.go:61] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:10:41.958548   28549 system_pods.go:61] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:10:41.958552   28549 system_pods.go:61] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:10:41.958555   28549 system_pods.go:61] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:10:41.958558   28549 system_pods.go:61] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:10:41.958562   28549 system_pods.go:61] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:10:41.958569   28549 system_pods.go:61] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 15:10:41.958574   28549 system_pods.go:74] duration metric: took 187.427949ms to wait for pod list to return data ...
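	[editor's note] The 12-pod inventory above is a single List call against the kube-system namespace with each pod's status summarised per line. A simplified sketch of that step (an assumption-level reduction of system_pods.go, not its actual code):

package syspods

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpSystemPods lists kube-system pods once and prints name, UID, and phase,
// roughly the shape of the system_pods.go lines above (the real check also
// distinguishes Running pods whose containers are not yet Ready, as with
// storage-provisioner here).
func dumpSystemPods(cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	return nil
}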
	I0906 15:10:41.958579   28549 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:10:42.148967   28549 request.go:533] Waited for 190.331771ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/default/serviceaccounts
	I0906 15:10:42.149107   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/default/serviceaccounts
	I0906 15:10:42.149115   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:42.149124   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:42.149132   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:42.152372   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:42.152385   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:42.152390   28549 round_trippers.go:580]     Content-Length: 261
	I0906 15:10:42.152396   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:42 GMT
	I0906 15:10:42.152402   28549 round_trippers.go:580]     Audit-Id: bdb3c494-9d48-4d7d-98a3-9f0dff362ae9
	I0906 15:10:42.152408   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:42.152415   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:42.152422   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:42.152427   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:42.152469   28549 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2535e7c3-51eb-44d2-8df8-c188db57dc73","resourceVersion":"310","creationTimestamp":"2022-09-06T22:06:47Z"}}]}
	I0906 15:10:42.152598   28549 default_sa.go:45] found service account: "default"
	I0906 15:10:42.152605   28549 default_sa.go:55] duration metric: took 194.021479ms for default service account to be created ...
	I0906 15:10:42.152610   28549 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:10:42.350016   28549 request.go:533] Waited for 197.352364ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:42.350052   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:42.350058   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:42.350096   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:42.350127   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:42.354324   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:42.354336   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:42.354342   28549 round_trippers.go:580]     Audit-Id: fdfa76a1-8991-489f-9d02-70af290c9326
	I0906 15:10:42.354348   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:42.354355   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:42.354361   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:42.354366   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:42.354371   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:42 GMT
	I0906 15:10:42.356027   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85340 chars]
	I0906 15:10:42.357887   28549 system_pods.go:86] 12 kube-system pods found
	I0906 15:10:42.357897   28549 system_pods.go:89] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:10:42.357902   28549 system_pods.go:89] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:10:42.357907   28549 system_pods.go:89] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:10:42.357910   28549 system_pods.go:89] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:10:42.357915   28549 system_pods.go:89] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:10:42.357918   28549 system_pods.go:89] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:10:42.357923   28549 system_pods.go:89] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:10:42.357927   28549 system_pods.go:89] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:10:42.357931   28549 system_pods.go:89] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:10:42.357947   28549 system_pods.go:89] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:10:42.357953   28549 system_pods.go:89] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:10:42.357960   28549 system_pods.go:89] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 15:10:42.357968   28549 system_pods.go:126] duration metric: took 205.352812ms to wait for k8s-apps to be running ...
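
	Note the storage-provisioner entry above: its phase is Running, but the Ready condition reports ContainersNotReady, and the wait still completes because this check only requires the pods to be running. A sketch of distinguishing the two signals with client-go types (the package name kverify is an illustrative assumption):

	    package kverify

	    import corev1 "k8s.io/api/core/v1"

	    // isPodReady reports whether a pod is both in phase Running and has its
	    // Ready condition set to True. In the listing above, storage-provisioner
	    // is Running but would return false here because Ready is not True.
	    func isPodReady(pod *corev1.Pod) bool {
	    	if pod.Status.Phase != corev1.PodRunning {
	    		return false
	    	}
	    	for _, cond := range pod.Status.Conditions {
	    		if cond.Type == corev1.PodReady {
	    			return cond.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }
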
	I0906 15:10:42.357974   28549 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:10:42.358022   28549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:10:42.367066   28549 system_svc.go:56] duration metric: took 9.086665ms WaitForService to wait for kubelet.
	I0906 15:10:42.367077   28549 kubeadm.go:573] duration metric: took 20.610823777s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:10:42.367089   28549 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:10:42.548377   28549 request.go:533] Waited for 181.186086ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes
	I0906 15:10:42.548416   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes
	I0906 15:10:42.548425   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:42.548435   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:42.548446   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:42.552376   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:42.552389   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:42.552395   28549 round_trippers.go:580]     Audit-Id: 9780444a-e8d4-40ee-b5af-fe67a45dd214
	I0906 15:10:42.552399   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:42.552405   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:42.552410   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:42.552414   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:42.552419   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:42 GMT
	I0906 15:10:42.552533   28549 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","op [truncated 16412 chars]
	I0906 15:10:42.552939   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:42.552946   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:42.552954   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:42.552957   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:42.552960   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:42.552963   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:42.552966   28549 node_conditions.go:105] duration metric: took 185.873701ms to run NodePressure ...
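
	The three identical capacity pairs above are printed once per node in the NodeList (three nodes, each reporting 6 CPUs and 61202244Ki of ephemeral storage). A sketch of reading those values with client-go (function and package names are illustrative):

	    package kverify

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // printNodeCapacity lists every node and prints the same capacity fields
	    // that appear in the node_conditions lines above.
	    func printNodeCapacity(cs kubernetes.Interface) error {
	    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		return err
	    	}
	    	for _, n := range nodes.Items {
	    		cpu := n.Status.Capacity[corev1.ResourceCPU]
	    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	    	}
	    	return nil
	    }
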
	I0906 15:10:42.552975   28549 start.go:216] waiting for startup goroutines ...
	I0906 15:10:42.553586   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:10:42.553649   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:10:42.575662   28549 out.go:177] * Starting worker node multinode-20220906150606-22187-m02 in cluster multinode-20220906150606-22187
	I0906 15:10:42.618531   28549 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:10:42.639337   28549 out.go:177] * Pulling base image ...
	I0906 15:10:42.681568   28549 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:10:42.681575   28549 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:10:42.681600   28549 cache.go:57] Caching tarball of preloaded images
	I0906 15:10:42.681762   28549 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:10:42.681782   28549 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
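
	The preload cache filename is fully determined by the fields visible in the path above: preload schema (v18), Kubernetes version, container runtime, storage driver, and architecture. A sketch of that naming convention (the helper is hypothetical; the pattern is read off the logged path):

	    package preload

	    import "fmt"

	    // tarballName reconstructs the cache filename pattern seen above; the
	    // "v18" schema prefix is taken from the logged path.
	    func tarballName(k8sVersion, runtime, storageDriver, arch string) string {
	    	return fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-%s-%s.tar.lz4",
	    		k8sVersion, runtime, storageDriver, arch)
	    }

	For example, tarballName("v1.25.0", "docker", "overlay2", "amd64") yields the exact file found in the cache above.
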
	I0906 15:10:42.681908   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:10:42.745010   28549 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:10:42.745033   28549 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:10:42.745043   28549 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:10:42.745103   28549 start.go:364] acquiring machines lock for multinode-20220906150606-22187-m02: {Name:mk634e5142ae9a72af4ccf4e417277befcfbdc1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:10:42.745169   28549 start.go:368] acquired machines lock for "multinode-20220906150606-22187-m02" in 55.286µs
	I0906 15:10:42.745185   28549 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:10:42.745190   28549 fix.go:55] fixHost starting: m02
	I0906 15:10:42.745433   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:10:42.809416   28549 fix.go:103] recreateIfNeeded on multinode-20220906150606-22187-m02: state=Stopped err=<nil>
	W0906 15:10:42.809436   28549 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:10:42.831180   28549 out.go:177] * Restarting existing docker container for "multinode-20220906150606-22187-m02" ...
	I0906 15:10:42.852985   28549 cli_runner.go:164] Run: docker start multinode-20220906150606-22187-m02
	I0906 15:10:43.188246   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:10:43.254114   28549 kic.go:415] container "multinode-20220906150606-22187-m02" state is running.
	I0906 15:10:43.254669   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:10:43.322971   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:10:43.323422   28549 machine.go:88] provisioning docker machine ...
	I0906 15:10:43.323435   28549 ubuntu.go:169] provisioning hostname "multinode-20220906150606-22187-m02"
	I0906 15:10:43.323493   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:43.391970   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:43.392155   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:43.392171   28549 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220906150606-22187-m02 && echo "multinode-20220906150606-22187-m02" | sudo tee /etc/hostname
	I0906 15:10:43.532110   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220906150606-22187-m02
	
	I0906 15:10:43.532191   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:43.597549   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:43.597725   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:43.597741   28549 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220906150606-22187-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220906150606-22187-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220906150606-22187-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:10:43.712509   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:10:43.712526   28549 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:10:43.712537   28549 ubuntu.go:177] setting up certificates
	I0906 15:10:43.712547   28549 provision.go:83] configureAuth start
	I0906 15:10:43.712618   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:10:43.778739   28549 provision.go:138] copyHostCerts
	I0906 15:10:43.778803   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:10:43.778881   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:10:43.778892   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:10:43.778984   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:10:43.779145   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:10:43.779211   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:10:43.779217   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:10:43.779277   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:10:43.779395   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:10:43.779422   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:10:43.779427   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:10:43.779483   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:10:43.779601   28549 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.multinode-20220906150606-22187-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220906150606-22187-m02]
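
	The server cert above is generated against the minikube CA with a SAN list covering the node IP, loopback, and hostname aliases. A self-signed sketch of building such a certificate with Go's crypto/x509 (minikube signs with its CA key instead, which this sketch skips for brevity; all SAN values are copied from the log line):

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		panic(err)
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20220906150606-22187-m02"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		// SANs mirror the san=[...] list logged above.
	    		DNSNames:    []string{"localhost", "minikube", "multinode-20220906150606-22187-m02"},
	    		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
	    	}
	    	// Self-signed for the sketch: the template doubles as its own parent.
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
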
	I0906 15:10:43.968716   28549 provision.go:172] copyRemoteCerts
	I0906 15:10:43.968773   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:10:43.968815   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.035889   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:44.132361   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 15:10:44.132426   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:10:44.151009   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 15:10:44.151085   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0906 15:10:44.167814   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 15:10:44.167874   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:10:44.184664   28549 provision.go:86] duration metric: configureAuth took 472.106773ms
	I0906 15:10:44.184678   28549 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:10:44.184844   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:10:44.184904   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.249656   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:44.249831   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:44.249841   28549 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:10:44.364803   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:10:44.364819   28549 ubuntu.go:71] root file system type: overlay
	I0906 15:10:44.364963   28549 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:10:44.365039   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.428787   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:44.428933   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:44.428985   28549 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:10:44.552546   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:10:44.552616   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.619197   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:44.619357   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:44.619370   28549 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:10:44.736783   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:10:44.736807   28549 machine.go:91] provisioned docker machine in 1.413364256s
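
	The provisioning step above only replaces docker.service and restarts the daemon when the freshly rendered unit differs from what is on disk (the diff ... || { mv ...; systemctl ... } command). The same idempotent pattern, sketched locally in Go (names are illustrative):

	    package provision

	    import (
	    	"bytes"
	    	"os"
	    )

	    // applyIfChanged writes newContents to path and triggers restart only when
	    // the file actually changed, mirroring the diff-or-swap shell command above.
	    func applyIfChanged(path string, newContents []byte, restart func() error) error {
	    	old, err := os.ReadFile(path)
	    	if err == nil && bytes.Equal(old, newContents) {
	    		return nil // unchanged: skip the disruptive restart
	    	}
	    	if err := os.WriteFile(path, newContents, 0644); err != nil {
	    		return err
	    	}
	    	return restart()
	    }
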
	I0906 15:10:44.736814   28549 start.go:300] post-start starting for "multinode-20220906150606-22187-m02" (driver="docker")
	I0906 15:10:44.736822   28549 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:10:44.736883   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:10:44.736926   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.801413   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:44.881207   28549 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:10:44.884507   28549 command_runner.go:130] > NAME="Ubuntu"
	I0906 15:10:44.884517   28549 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0906 15:10:44.884522   28549 command_runner.go:130] > ID=ubuntu
	I0906 15:10:44.884528   28549 command_runner.go:130] > ID_LIKE=debian
	I0906 15:10:44.884533   28549 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0906 15:10:44.884537   28549 command_runner.go:130] > VERSION_ID="20.04"
	I0906 15:10:44.884541   28549 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 15:10:44.884547   28549 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 15:10:44.884554   28549 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 15:10:44.884564   28549 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 15:10:44.884570   28549 command_runner.go:130] > VERSION_CODENAME=focal
	I0906 15:10:44.884580   28549 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0906 15:10:44.884681   28549 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:10:44.884695   28549 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:10:44.884704   28549 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:10:44.884710   28549 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:10:44.884716   28549 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:10:44.884820   28549 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:10:44.884956   28549 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:10:44.884964   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /etc/ssl/certs/221872.pem
	I0906 15:10:44.885135   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:10:44.892218   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:10:44.908508   28549 start.go:303] post-start completed in 171.683253ms
	I0906 15:10:44.908571   28549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:10:44.908621   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.972115   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:45.052330   28549 command_runner.go:130] > 12%
	I0906 15:10:45.052779   28549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:10:45.056934   28549 command_runner.go:130] > 49G
	I0906 15:10:45.057219   28549 fix.go:57] fixHost completed within 2.312018763s
	I0906 15:10:45.057231   28549 start.go:83] releasing machines lock for "multinode-20220906150606-22187-m02", held for 2.312047126s
	I0906 15:10:45.057313   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:10:45.142585   28549 out.go:177] * Found network options:
	I0906 15:10:45.163662   28549 out.go:177]   - NO_PROXY=192.168.58.2
	W0906 15:10:45.184811   28549 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 15:10:45.184863   28549 proxy.go:119] fail to check proxy env: Error ip not in block
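
	The two "fail to check proxy env: Error ip not in block" warnings appear benign here: the check seemingly treats NO_PROXY entries as CIDR blocks, and the bare IP 192.168.58.2 does not parse as one. A sketch of a check that accepts both forms (function and package names are illustrative):

	    package proxy

	    import (
	    	"net"
	    	"strings"
	    )

	    // ipInNoProxy reports whether ip is covered by a comma-separated NO_PROXY
	    // value, accepting both bare IPs and CIDR blocks.
	    func ipInNoProxy(ip, noProxy string) bool {
	    	target := net.ParseIP(ip)
	    	for _, entry := range strings.Split(noProxy, ",") {
	    		entry = strings.TrimSpace(entry)
	    		if entry == ip {
	    			return true
	    		}
	    		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(target) {
	    			return true
	    		}
	    	}
	    	return false
	    }
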
	I0906 15:10:45.185006   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:10:45.185017   28549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:10:45.185059   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:45.185095   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:45.253383   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:45.253502   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:45.380356   28549 command_runner.go:130] > <a href="https://github.com/kubernetes/k8s.io/wiki/New-Registry-url-for-Kubernetes-(registry.k8s.io)">Temporary Redirect</a>.
	I0906 15:10:45.382029   28549 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0906 15:10:45.397509   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:10:45.465920   28549 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 15:10:45.547928   28549 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:10:45.558771   28549 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0906 15:10:45.558783   28549 command_runner.go:130] > [Unit]
	I0906 15:10:45.558789   28549 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 15:10:45.558793   28549 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 15:10:45.558798   28549 command_runner.go:130] > BindsTo=containerd.service
	I0906 15:10:45.558805   28549 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0906 15:10:45.558811   28549 command_runner.go:130] > Wants=network-online.target
	I0906 15:10:45.558819   28549 command_runner.go:130] > Requires=docker.socket
	I0906 15:10:45.558825   28549 command_runner.go:130] > StartLimitBurst=3
	I0906 15:10:45.558832   28549 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 15:10:45.558836   28549 command_runner.go:130] > [Service]
	I0906 15:10:45.558840   28549 command_runner.go:130] > Type=notify
	I0906 15:10:45.558843   28549 command_runner.go:130] > Restart=on-failure
	I0906 15:10:45.558847   28549 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0906 15:10:45.558853   28549 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 15:10:45.558861   28549 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 15:10:45.558867   28549 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 15:10:45.558874   28549 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 15:10:45.558888   28549 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 15:10:45.558894   28549 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 15:10:45.558900   28549 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 15:10:45.558909   28549 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 15:10:45.558916   28549 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 15:10:45.558920   28549 command_runner.go:130] > ExecStart=
	I0906 15:10:45.558933   28549 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0906 15:10:45.558937   28549 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 15:10:45.558943   28549 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 15:10:45.558948   28549 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 15:10:45.558952   28549 command_runner.go:130] > LimitNOFILE=infinity
	I0906 15:10:45.558955   28549 command_runner.go:130] > LimitNPROC=infinity
	I0906 15:10:45.558958   28549 command_runner.go:130] > LimitCORE=infinity
	I0906 15:10:45.558963   28549 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 15:10:45.558967   28549 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 15:10:45.558971   28549 command_runner.go:130] > TasksMax=infinity
	I0906 15:10:45.558979   28549 command_runner.go:130] > TimeoutStartSec=0
	I0906 15:10:45.558984   28549 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 15:10:45.558988   28549 command_runner.go:130] > Delegate=yes
	I0906 15:10:45.558998   28549 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 15:10:45.559001   28549 command_runner.go:130] > KillMode=process
	I0906 15:10:45.559006   28549 command_runner.go:130] > [Install]
	I0906 15:10:45.559010   28549 command_runner.go:130] > WantedBy=multi-user.target
	I0906 15:10:45.559711   28549 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:10:45.559761   28549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:10:45.568579   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:10:45.580371   28549 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:10:45.580381   28549 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:10:45.581381   28549 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:10:45.654593   28549 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:10:45.730421   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:10:45.797805   28549 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:10:46.009555   28549 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:10:46.075810   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:10:46.143357   28549 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:10:46.152782   28549 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:10:46.152854   28549 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:10:46.156531   28549 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 15:10:46.156543   28549 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 15:10:46.156566   28549 command_runner.go:130] > Device: 10002fh/1048623d	Inode: 131         Links: 1
	I0906 15:10:46.156576   28549 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0906 15:10:46.156594   28549 command_runner.go:130] > Access: 2022-09-06 22:10:46.032110218 +0000
	I0906 15:10:46.156604   28549 command_runner.go:130] > Modify: 2022-09-06 22:10:45.483110267 +0000
	I0906 15:10:46.156611   28549 command_runner.go:130] > Change: 2022-09-06 22:10:45.484110267 +0000
	I0906 15:10:46.156616   28549 command_runner.go:130] >  Birth: -
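
	"Will wait 60s for socket path" above is a poll loop: stat the socket until it exists or the deadline passes. A minimal sketch of that wait (names and the 500ms poll interval are illustrative):

	    package provision

	    import (
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // waitForSocket polls until path exists as a unix socket or timeout elapses.
	    func waitForSocket(path string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
	    			return nil
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("timed out waiting for %s", path)
	    }
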
	I0906 15:10:46.156701   28549 start.go:471] Will wait 60s for crictl version
	I0906 15:10:46.156746   28549 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:10:46.184350   28549 command_runner.go:130] > Version:  0.1.0
	I0906 15:10:46.184362   28549 command_runner.go:130] > RuntimeName:  docker
	I0906 15:10:46.184483   28549 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0906 15:10:46.184659   28549 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0906 15:10:46.187317   28549 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:10:46.187380   28549 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:10:46.219973   28549 command_runner.go:130] > 20.10.17
	I0906 15:10:46.223574   28549 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:10:46.256593   28549 command_runner.go:130] > 20.10.17
	I0906 15:10:46.302167   28549 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:10:46.324046   28549 out.go:177]   - env NO_PROXY=192.168.58.2
	I0906 15:10:46.345395   28549 cli_runner.go:164] Run: docker exec -t multinode-20220906150606-22187-m02 dig +short host.docker.internal
	I0906 15:10:46.462614   28549 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:10:46.462714   28549 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:10:46.466882   28549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
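
	The shell one-liner above makes the host.minikube.internal mapping idempotent: strip any existing line for the name, append the fresh entry, and copy the temp file back over /etc/hosts. An equivalent sketch in Go (helper name is hypothetical):

	    package provision

	    import (
	    	"os"
	    	"strings"
	    )

	    // upsertHostsEntry rewrites an /etc/hosts-style file so exactly one line
	    // maps hostname, mirroring the grep -v / echo / cp pipeline above.
	    func upsertHostsEntry(path, ip, hostname string) error {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return err
	    	}
	    	var kept []string
	    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	    		if !strings.HasSuffix(line, "\t"+hostname) {
	    			kept = append(kept, line)
	    		}
	    	}
	    	kept = append(kept, ip+"\t"+hostname)
	    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	    }
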
	I0906 15:10:46.476235   28549 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187 for IP: 192.168.58.3
	I0906 15:10:46.476355   28549 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:10:46.476403   28549 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:10:46.476410   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 15:10:46.476431   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 15:10:46.476448   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 15:10:46.476464   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 15:10:46.476592   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:10:46.476634   28549 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:10:46.476645   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:10:46.476691   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:10:46.476725   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:10:46.476754   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:10:46.476817   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:10:46.476853   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.476872   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.476886   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem -> /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.477195   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:10:46.495550   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:10:46.514404   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:10:46.531072   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:10:46.548321   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:10:46.564579   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:10:46.580884   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:10:46.597302   28549 ssh_runner.go:195] Run: openssl version
	I0906 15:10:46.602256   28549 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0906 15:10:46.602613   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:10:46.610227   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.613913   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.614072   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.614115   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.618969   28549 command_runner.go:130] > 3ec20f2e
	I0906 15:10:46.619205   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:10:46.626953   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:10:46.634854   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.638630   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.638752   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.638798   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.643667   28549 command_runner.go:130] > b5213941
	I0906 15:10:46.644135   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:10:46.651147   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:10:46.658802   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.662678   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.662755   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.662801   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.667598   28549 command_runner.go:130] > 51391683
	I0906 15:10:46.667930   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
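
	Each CA file is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (for example 3ec20f2e.0 above), which is the lookup scheme OpenSSL uses to find trusted certs. A sketch of that step, using the same openssl flags as in the log (helper name is illustrative):

	    package provision

	    import (
	    	"os"
	    	"os/exec"
	    	"path/filepath"
	    	"strings"
	    )

	    // linkCertByHash computes a certificate's subject hash via openssl and
	    // creates the /etc/ssl/certs/<hash>.0 symlink the log shows being tested.
	    func linkCertByHash(certPath string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	    	if err != nil {
	    		return err
	    	}
	    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	    	if _, err := os.Lstat(link); err == nil {
	    		return nil // already linked
	    	}
	    	return os.Symlink(certPath, link)
	    }
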
	I0906 15:10:46.675148   28549 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:10:46.746440   28549 command_runner.go:130] > systemd
	I0906 15:10:46.751729   28549 cni.go:95] Creating CNI manager for ""
	I0906 15:10:46.751748   28549 cni.go:156] 3 nodes found, recommending kindnet
	I0906 15:10:46.751770   28549 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:10:46.751813   28549 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220906150606-22187 NodeName:multinode-20220906150606-22187-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:10:46.751910   28549 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220906150606-22187-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
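
	The kubeadm config above is rendered per node from the options struct logged at kubeadm.go:158 (note the m02-specific name and node-ip 192.168.58.3). A sketch of that kind of rendering with text/template, using the nodeRegistration stanza as the example (the template and struct fields here are illustrative, not minikube's actual template):

	    package bsutil

	    import (
	    	"strings"
	    	"text/template"
	    )

	    const nodeRegistrationTmpl = `nodeRegistration:
	      criSocket: {{.CRISocket}}
	      name: "{{.NodeName}}"
	      kubeletExtraArgs:
	        node-ip: {{.NodeIP}}
	      taints: []
	    `

	    type nodeOpts struct {
	    	CRISocket, NodeName, NodeIP string
	    }

	    // renderNodeRegistration fills the stanza with per-node values, matching
	    // the shape of the generated config above.
	    func renderNodeRegistration(o nodeOpts) (string, error) {
	    	t, err := template.New("node").Parse(nodeRegistrationTmpl)
	    	if err != nil {
	    		return "", err
	    	}
	    	var b strings.Builder
	    	if err := t.Execute(&b, o); err != nil {
	    		return "", err
	    	}
	    	return b.String(), nil
	    }
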
	
	I0906 15:10:46.751969   28549 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220906150606-22187-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:10:46.752025   28549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:10:46.759093   28549 command_runner.go:130] > kubeadm
	I0906 15:10:46.759104   28549 command_runner.go:130] > kubectl
	I0906 15:10:46.759112   28549 command_runner.go:130] > kubelet
	I0906 15:10:46.759905   28549 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:10:46.759960   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0906 15:10:46.766908   28549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (496 bytes)
	I0906 15:10:46.779093   28549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:10:46.792909   28549 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:10:46.796497   28549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:10:46.805780   28549 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:10:46.805960   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:10:46.805965   28549 start.go:285] JoinCluster: &{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:10:46.806029   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0906 15:10:46.806072   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:46.869829   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:10:47.000276   28549 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
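
	The join command printed above comes from the kubeadm token create --print-join-command --ttl=0 run at 15:10:46.806; --ttl=0 makes the bootstrap token non-expiring, which is why the same token can be reused across every retry below. If the token ever had to be regenerated by hand on the control-plane node, the equivalent invocation (using the binary path from this log) would be:

	sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" \
	  kubeadm token create --print-join-command --ttl=0
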
	I0906 15:10:47.000317   28549 start.go:298] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:10:47.000337   28549 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:10:47.000607   28549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl drain multinode-20220906150606-22187-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0906 15:10:47.000650   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:47.066185   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:10:47.183827   28549 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0906 15:10:47.212638   28549 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-cddz8, kube-system/kube-proxy-wnrrx
	I0906 15:10:50.222387   28549 command_runner.go:130] > node/multinode-20220906150606-22187-m02 cordoned
	I0906 15:10:50.222406   28549 command_runner.go:130] > pod "busybox-65db55d5d6-ppptb" has DeletionTimestamp older than 1 seconds, skipping
	I0906 15:10:50.222411   28549 command_runner.go:130] > node/multinode-20220906150606-22187-m02 drained
	I0906 15:10:50.222426   28549 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl drain multinode-20220906150606-22187-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.221792359s)
	I0906 15:10:50.222438   28549 node.go:109] successfully drained node "m02"
	I0906 15:10:50.222760   28549 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:10:50.222980   28549 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:10:50.223238   28549 request.go:1073] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0906 15:10:50.223263   28549 round_trippers.go:463] DELETE https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:10:50.223267   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:50.223273   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:50.223280   28549 round_trippers.go:473]     Content-Type: application/json
	I0906 15:10:50.223288   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:50.227252   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:50.227266   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:50.227275   28549 round_trippers.go:580]     Audit-Id: 8493c1a8-8349-4bb8-9e0c-5e91482b57d7
	I0906 15:10:50.227283   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:50.227288   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:50.227295   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:50.227300   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:50.227304   28549 round_trippers.go:580]     Content-Length: 185
	I0906 15:10:50.227309   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:50 GMT
	I0906 15:10:50.227322   28549 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-20220906150606-22187-m02","kind":"nodes","uid":"4f069859-75f2-4e6f-a5c1-5cceb9510b05"}}
	I0906 15:10:50.227344   28549 node.go:125] successfully deleted node "m02"
	I0906 15:10:50.227352   28549 start.go:302] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
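
	The drain-then-delete just completed is the programmatic form of the usual manual cleanup; done by hand against this cluster it would look roughly like this (note the log above already warns that --delete-local-data is deprecated in favor of --delete-emptydir-data):

	kubectl drain multinode-20220906150606-22187-m02 \
	  --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data
	kubectl delete node multinode-20220906150606-22187-m02
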
	I0906 15:10:50.227363   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:10:50.227376   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:10:50.290880   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:10:50.401454   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:10:50.401470   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:10:50.421485   28549 command_runner.go:130] ! W0906 22:10:50.299616    1105 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:50.421498   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:10:50.421518   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:10:50.421524   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:10:50.421534   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:10:50.421542   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:10:50.421553   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:10:50.421560   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:10:50.421590   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:10:50.299616    1105 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:10:50.421603   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:10:50.421612   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:10:50.458648   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:10:50.458670   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:10:50.458696   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:10:50.458716   28549 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:10:50.299616    1105 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
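
	Two failures feed each other in this retry loop: the join fails because a Ready Node named multinode-20220906150606-22187-m02 reappears in the API (most likely the kubelet on m02 is still running with its old credentials and re-registers itself after the API-side delete), and the cleanup kubeadm reset fails because the node exposes both a containerd and a cri-dockerd socket. The reset error itself names the fix; run by hand on m02 it would mean passing the socket explicitly. A sketch only, not what the retry loop actually runs:

	# disambiguate the CRI endpoint, as the reset error requests
	sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" \
	  kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
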
	I0906 15:11:01.505493   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:11:01.505540   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:11:01.541153   28549 command_runner.go:130] ! W0906 22:11:01.557994    1472 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:11:01.541381   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:11:01.565651   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:11:01.570349   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:11:01.630836   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:11:01.630848   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:11:01.654802   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:11:01.654814   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:01.657936   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:11:01.657949   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:11:01.657956   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:11:01.657990   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:01.557994    1472 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:01.657998   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:11:01.658009   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:11:01.692431   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:11:01.692452   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:01.692474   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:01.692487   28549 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:01.557994    1472 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:23.301254   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:11:23.301339   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:11:23.336882   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:11:23.435229   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:11:23.435245   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:11:23.452875   28549 command_runner.go:130] ! W0906 22:11:23.347792    1851 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:11:23.452889   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:11:23.452898   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:11:23.452911   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:11:23.452918   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:11:23.452924   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:11:23.452934   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:11:23.452941   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:11:23.452974   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:23.347792    1851 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:23.452981   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:11:23.452988   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:11:23.490281   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:11:23.490294   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:23.490309   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:23.490321   28549 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:23.347792    1851 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.694943   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:11:49.694987   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:11:49.730356   28549 command_runner.go:130] ! W0906 22:11:49.738584    2107 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:11:49.730479   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:11:49.753632   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:11:49.758227   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:11:49.814483   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:11:49.814497   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:11:49.839413   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:11:49.839426   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.842899   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:11:49.842911   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:11:49.842917   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:11:49.842942   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:49.738584    2107 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.842953   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:11:49.842964   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:11:49.879137   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:11:49.879154   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.879169   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.879179   28549 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:49.738584    2107 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.528439   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:12:21.528491   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:12:21.563322   28549 command_runner.go:130] ! W0906 22:12:21.572810    2419 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:12:21.563499   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:12:21.591204   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:12:21.595725   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:12:21.650881   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:12:21.650910   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:12:21.674745   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:12:21.674757   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.677651   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:12:21.677663   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:12:21.677670   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:12:21.677703   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:12:21.572810    2419 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.677711   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:12:21.677719   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:12:21.714343   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:12:21.714359   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.714380   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.714391   28549 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:12:21.572810    2419 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:13:08.524499   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:13:08.524545   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:13:08.561083   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:13:08.658459   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:13:08.658486   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:13:08.678063   28549 command_runner.go:130] ! W0906 22:13:08.561423    2827 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:08.678077   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:13:08.678089   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:13:08.678096   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:13:08.678102   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:13:08.678108   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:13:08.678118   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:13:08.678123   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:13:08.678154   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:13:08.561423    2827 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:13:08.678162   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:13:08.678170   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:13:08.715429   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:13:08.715448   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:13:08.715473   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:13:08.715491   28549 start.go:287] JoinCluster complete in 2m21.909026943s
	I0906 15:13:08.737406   28549 out.go:177] 
	W0906 15:13:08.758737   28549 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:13:08.561423    2827 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:13:08.758769   28549 out.go:239] * 
	W0906 15:13:08.759858   28549 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:13:08.843265   28549 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-20220906150606-22187" : exit status 80
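Exit status 80 corresponds to minikube's guest-layer error class, matching the GUEST_START reason in the output above. Locally, the usual way to recover from a wedged multinode profile like this one is to delete and recreate it, assuming the profile name from this log:

	out/minikube-darwin-amd64 delete -p multinode-20220906150606-22187
	out/minikube-darwin-amd64 start -p multinode-20220906150606-22187 --driver=docker --wait=true
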
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220906150606-22187
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220906150606-22187
helpers_test.go:235: (dbg) docker inspect multinode-20220906150606-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf",
	        "Created": "2022-09-06T22:06:13.015437812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 86368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:09:53.511867943Z",
	            "FinishedAt": "2022-09-06T22:09:27.640967798Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf/hostname",
	        "HostsPath": "/var/lib/docker/containers/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf/hosts",
	        "LogPath": "/var/lib/docker/containers/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf-json.log",
	        "Name": "/multinode-20220906150606-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20220906150606-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20220906150606-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/79698af834265629f9a73822cb28e5c93f4ea8c6b298e4511abd2a389762f014-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79698af834265629f9a73822cb28e5c93f4ea8c6b298e4511abd2a389762f014/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79698af834265629f9a73822cb28e5c93f4ea8c6b298e4511abd2a389762f014/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79698af834265629f9a73822cb28e5c93f4ea8c6b298e4511abd2a389762f014/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20220906150606-22187",
	                "Source": "/var/lib/docker/volumes/multinode-20220906150606-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20220906150606-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20220906150606-22187",
	                "name.minikube.sigs.k8s.io": "multinode-20220906150606-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd30496556585d7e3179b43c5c2291c19fd274c25d42b9a21e0f411a1941fdd9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57201"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57197"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57198"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57199"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57200"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dd3049655658",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20220906150606-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f96b4439a54b",
	                        "multinode-20220906150606-22187"
	                    ],
	                    "NetworkID": "ffe171e224281ce06adeca6944e902dfd3e453d98c2cfc0a549b1b9fef9c84ec",
	                    "EndpointID": "ced139ce8a0ebd5288719c8482f8cb21f9ae711e3edffa0a6dcb9962053bf0a2",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
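For post-mortems like this one, the full "docker inspect" JSON can be narrowed to just the fields of interest with a Go-template --format, the same mechanism the harness itself uses later in this log. Two illustrative queries against the container above (field paths taken directly from that JSON):

	# Container state and restart count:
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' multinode-20220906150606-22187
	# Host port that Docker mapped to the node's SSH port 22/tcp (57201 above):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-20220906150606-22187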
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20220906150606-22187 -n multinode-20220906150606-22187
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 logs -n 25: (3.370291246s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	| Command |                                                                   Args                                                                   |            Profile             |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m02:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile219338308/001/cp-test_multinode-20220906150606-22187-m02.txt |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m02:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187:/home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187.txt                |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 sudo cat                                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187.txt                                               |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m02:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187-m03:/home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187-m03.txt        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m03 sudo cat                                                        | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187-m03.txt                                           |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp testdata/cp-test.txt                                                                                   | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187-m03:/home/docker/cp-test.txt                                                                              |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m03:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile219338308/001/cp-test_multinode-20220906150606-22187-m03.txt |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m03:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187:/home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187.txt                |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 sudo cat                                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187.txt                                               |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m03:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187-m02:/home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187-m02.txt        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m02 sudo cat                                                        | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187-m02.txt                                           |                                |         |         |                     |                     |
	| node    | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | node stop m03                                                                                                                            |                                |         |         |                     |                     |
	| node    | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:09 PDT |
	|         | node start m03                                                                                                                           |                                |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                                        |                                |         |         |                     |                     |
	| node    | list -p                                                                                                                                  | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:09 PDT |                     |
	|         | multinode-20220906150606-22187                                                                                                           |                                |         |         |                     |                     |
	| stop    | -p                                                                                                                                       | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:09 PDT | 06 Sep 22 15:09 PDT |
	|         | multinode-20220906150606-22187                                                                                                           |                                |         |         |                     |                     |
	| start   | -p                                                                                                                                       | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:09 PDT |                     |
	|         | multinode-20220906150606-22187                                                                                                           |                                |         |         |                     |                     |
	|         | --wait=true -v=8                                                                                                                         |                                |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                                        |                                |         |         |                     |                     |
	| node    | list -p                                                                                                                                  | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:13 PDT |                     |
	|         | multinode-20220906150606-22187                                                                                                           |                                |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:09:52
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:09:52.249113   28549 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:09:52.249279   28549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:09:52.249284   28549 out.go:309] Setting ErrFile to fd 2...
	I0906 15:09:52.249288   28549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:09:52.249395   28549 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:09:52.249834   28549 out.go:303] Setting JSON to false
	I0906 15:09:52.265079   28549 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7763,"bootTime":1662494429,"procs":330,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:09:52.265176   28549 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:09:52.287908   28549 out.go:177] * [multinode-20220906150606-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:09:52.330043   28549 notify.go:193] Checking for updates...
	I0906 15:09:52.351753   28549 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:09:52.373885   28549 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:09:52.395109   28549 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:09:52.416813   28549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:09:52.438126   28549 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:09:52.460653   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:09:52.460741   28549 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:09:52.529017   28549 docker.go:137] docker version: linux-20.10.17
	I0906 15:09:52.529142   28549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:09:52.657876   28549 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:09:52.596148613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:09:52.701583   28549 out.go:177] * Using the docker driver based on existing profile
	I0906 15:09:52.723683   28549 start.go:284] selected driver: docker
	I0906 15:09:52.723709   28549 start.go:808] validating driver "docker" against &{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevir
t:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:09:52.723901   28549 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:09:52.724037   28549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:09:52.854484   28549 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:09:52.792922001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:09:52.856623   28549 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:09:52.856648   28549 cni.go:95] Creating CNI manager for ""
	I0906 15:09:52.856657   28549 cni.go:156] 3 nodes found, recommending kindnet
	I0906 15:09:52.856671   28549 start_flags.go:310] config:
	{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
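The config dump above is minikube's in-memory cluster configuration, which is persisted as JSON under the profile directory (see the "Saving config to ..." lines below). A small sketch for extracting just the node list from that file, assuming jq is installed on the host:

	jq '.Nodes[] | {Name, IP, ControlPlane}' \
	  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json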
	I0906 15:09:52.900403   28549 out.go:177] * Starting control plane node multinode-20220906150606-22187 in cluster multinode-20220906150606-22187
	I0906 15:09:52.921438   28549 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:09:52.943183   28549 out.go:177] * Pulling base image ...
	I0906 15:09:52.986303   28549 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:09:52.986305   28549 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:09:52.986350   28549 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:09:52.986364   28549 cache.go:57] Caching tarball of preloaded images
	I0906 15:09:52.986482   28549 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:09:52.986502   28549 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:09:52.987047   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:09:53.047916   28549 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:09:53.047933   28549 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:09:53.047944   28549 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:09:53.048001   28549 start.go:364] acquiring machines lock for multinode-20220906150606-22187: {Name:mk1f646be94138ec52cb695dba30aa00d55e22df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:09:53.048114   28549 start.go:368] acquired machines lock for "multinode-20220906150606-22187" in 91.342µs
	I0906 15:09:53.048135   28549 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:09:53.048145   28549 fix.go:55] fixHost starting: 
	I0906 15:09:53.048402   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:09:53.110627   28549 fix.go:103] recreateIfNeeded on multinode-20220906150606-22187: state=Stopped err=<nil>
	W0906 15:09:53.110654   28549 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:09:53.154328   28549 out.go:177] * Restarting existing docker container for "multinode-20220906150606-22187" ...
	I0906 15:09:53.175453   28549 cli_runner.go:164] Run: docker start multinode-20220906150606-22187
	I0906 15:09:53.507425   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:09:53.571161   28549 kic.go:415] container "multinode-20220906150606-22187" state is running.
	I0906 15:09:53.571743   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:09:53.638862   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:09:53.639261   28549 machine.go:88] provisioning docker machine ...
	I0906 15:09:53.639282   28549 ubuntu.go:169] provisioning hostname "multinode-20220906150606-22187"
	I0906 15:09:53.639365   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:53.704265   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:53.704456   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:53.704468   28549 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220906150606-22187 && echo "multinode-20220906150606-22187" | sudo tee /etc/hostname
	I0906 15:09:53.826717   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220906150606-22187
	
	I0906 15:09:53.826793   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:53.891178   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:53.891333   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:53.891347   28549 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220906150606-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220906150606-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220906150606-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:09:54.003154   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:09:54.003177   28549 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:09:54.003192   28549 ubuntu.go:177] setting up certificates
	I0906 15:09:54.003205   28549 provision.go:83] configureAuth start
	I0906 15:09:54.003273   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:09:54.129783   28549 provision.go:138] copyHostCerts
	I0906 15:09:54.129831   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:09:54.129904   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:09:54.129921   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:09:54.130043   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:09:54.130221   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:09:54.130250   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:09:54.130254   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:09:54.130317   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:09:54.130457   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:09:54.130483   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:09:54.130489   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:09:54.130549   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:09:54.130667   28549 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.multinode-20220906150606-22187 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220906150606-22187]
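
The provision step above issues a server certificate signed by the minikube CA, with the node IP, loopback, and cluster hostnames in the SAN list. A hedged sketch of how such a cert can be issued with Go's crypto/x509 (names and validity are illustrative, not minikube's actual provision code):

package certsdemo

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a new server certificate with an existing CA,
// embedding the same kind of SAN list the log line shows (node IP,
// loopback, and the minikube hostnames).
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative; prefer a random serial
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20220906150606-22187"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-20220906150606-22187"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
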
	I0906 15:09:54.167995   28549 provision.go:172] copyRemoteCerts
	I0906 15:09:54.168061   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:09:54.168114   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.232559   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:54.314058   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 15:09:54.314145   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:09:54.332104   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 15:09:54.332177   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0906 15:09:54.352099   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 15:09:54.352169   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:09:54.369496   28549 provision.go:86] duration metric: configureAuth took 366.277095ms
	I0906 15:09:54.369509   28549 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:09:54.369685   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:09:54.369744   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.434492   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:54.434691   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:54.434702   28549 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:09:54.544696   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:09:54.544711   28549 ubuntu.go:71] root file system type: overlay
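
The provisioner keys later decisions (such as the Docker storage driver) off the root filesystem type, probed with `df --output=fstype / | tail -n 1`, which returns `overlay` here. The same probe in Go, assuming GNU df as on the Ubuntu host (macOS df has no --output flag):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: df --output=fstype / | tail -n 1
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		panic(err)
	}
	fields := strings.Fields(string(out)) // ["Type", "overlay"]
	fmt.Println(fields[len(fields)-1])
}
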
	I0906 15:09:54.544889   28549 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:09:54.544960   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.607160   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:54.607338   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:54.607390   28549 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:09:54.726430   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:09:54.726508   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.788587   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:09:54.788784   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57201 <nil> <nil>}
	I0906 15:09:54.788801   28549 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:09:54.903682   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: 
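
The command above only swaps in docker.service.new and restarts Docker when `diff -u` reports a difference (non-zero exit), so an already-converged machine is left alone. A sketch of the same update-if-changed pattern in Go (paths as in the log; error handling simplified):

package unitdemo

import (
	"bytes"
	"os"
	"os/exec"
)

// applyIfChanged swaps in the staged unit and restarts docker only when the
// staged content differs, mirroring the `diff -u ... || { mv; ...; }` one-liner.
func applyIfChanged() error {
	cur, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
	next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		return err // nothing staged
	}
	if bytes.Equal(cur, next) {
		return nil // unit unchanged: no reload, no restart
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}
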
	I0906 15:09:54.903701   28549 machine.go:91] provisioned docker machine in 1.264428825s
	I0906 15:09:54.903711   28549 start.go:300] post-start starting for "multinode-20220906150606-22187" (driver="docker")
	I0906 15:09:54.903716   28549 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:09:54.903789   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:09:54.903850   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:54.966693   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:55.047662   28549 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:09:55.050803   28549 command_runner.go:130] > NAME="Ubuntu"
	I0906 15:09:55.050811   28549 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0906 15:09:55.050814   28549 command_runner.go:130] > ID=ubuntu
	I0906 15:09:55.050817   28549 command_runner.go:130] > ID_LIKE=debian
	I0906 15:09:55.050821   28549 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0906 15:09:55.050831   28549 command_runner.go:130] > VERSION_ID="20.04"
	I0906 15:09:55.050836   28549 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 15:09:55.050843   28549 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 15:09:55.050848   28549 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 15:09:55.050857   28549 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 15:09:55.050861   28549 command_runner.go:130] > VERSION_CODENAME=focal
	I0906 15:09:55.050876   28549 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0906 15:09:55.050924   28549 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:09:55.050938   28549 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:09:55.050957   28549 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:09:55.050964   28549 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:09:55.050975   28549 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:09:55.051087   28549 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:09:55.051224   28549 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:09:55.051230   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /etc/ssl/certs/221872.pem
	I0906 15:09:55.051374   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:09:55.058035   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:09:55.074524   28549 start.go:303] post-start completed in 170.804101ms
	I0906 15:09:55.074592   28549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:09:55.074639   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:55.137425   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:55.218052   28549 command_runner.go:130] > 12%!
	(MISSING)I0906 15:09:55.218120   28549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:09:55.222006   28549 command_runner.go:130] > 49G
	I0906 15:09:55.222235   28549 fix.go:57] fixHost completed within 2.174086462s
	I0906 15:09:55.222246   28549 start.go:83] releasing machines lock for "multinode-20220906150606-22187", held for 2.174119704s
	I0906 15:09:55.222314   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:09:55.285473   28549 ssh_runner.go:195] Run: systemctl --version
	I0906 15:09:55.285474   28549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:09:55.285549   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:55.285631   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:55.353413   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:55.353718   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:09:55.484608   28549 command_runner.go:130] > <a href="https://github.com/kubernetes/k8s.io/wiki/New-Registry-url-for-Kubernetes-(registry.k8s.io)">Temporary Redirect</a>.
	I0906 15:09:55.484656   28549 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0906 15:09:55.484670   28549 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0906 15:09:55.484778   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:09:55.491732   28549 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0906 15:09:55.504103   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:09:55.573539   28549 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 15:09:55.650492   28549 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:09:55.660451   28549 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0906 15:09:55.660618   28549 command_runner.go:130] > [Unit]
	I0906 15:09:55.660629   28549 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 15:09:55.660636   28549 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 15:09:55.660644   28549 command_runner.go:130] > BindsTo=containerd.service
	I0906 15:09:55.660650   28549 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0906 15:09:55.660653   28549 command_runner.go:130] > Wants=network-online.target
	I0906 15:09:55.660661   28549 command_runner.go:130] > Requires=docker.socket
	I0906 15:09:55.660666   28549 command_runner.go:130] > StartLimitBurst=3
	I0906 15:09:55.660673   28549 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 15:09:55.660678   28549 command_runner.go:130] > [Service]
	I0906 15:09:55.660683   28549 command_runner.go:130] > Type=notify
	I0906 15:09:55.660690   28549 command_runner.go:130] > Restart=on-failure
	I0906 15:09:55.660698   28549 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 15:09:55.660705   28549 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 15:09:55.660711   28549 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 15:09:55.660716   28549 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 15:09:55.660721   28549 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 15:09:55.660727   28549 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 15:09:55.660734   28549 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 15:09:55.660744   28549 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 15:09:55.660751   28549 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 15:09:55.660755   28549 command_runner.go:130] > ExecStart=
	I0906 15:09:55.660767   28549 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0906 15:09:55.660772   28549 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 15:09:55.660777   28549 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 15:09:55.660798   28549 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 15:09:55.660806   28549 command_runner.go:130] > LimitNOFILE=infinity
	I0906 15:09:55.660810   28549 command_runner.go:130] > LimitNPROC=infinity
	I0906 15:09:55.660814   28549 command_runner.go:130] > LimitCORE=infinity
	I0906 15:09:55.660818   28549 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 15:09:55.660822   28549 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 15:09:55.660827   28549 command_runner.go:130] > TasksMax=infinity
	I0906 15:09:55.660830   28549 command_runner.go:130] > TimeoutStartSec=0
	I0906 15:09:55.660835   28549 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 15:09:55.660839   28549 command_runner.go:130] > Delegate=yes
	I0906 15:09:55.660844   28549 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 15:09:55.660847   28549 command_runner.go:130] > KillMode=process
	I0906 15:09:55.660858   28549 command_runner.go:130] > [Install]
	I0906 15:09:55.660867   28549 command_runner.go:130] > WantedBy=multi-user.target
	I0906 15:09:55.661555   28549 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:09:55.661609   28549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:09:55.670610   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:09:55.682055   28549 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:09:55.682066   28549 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
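
crictl learns which CRI socket to talk to from /etc/crictl.yaml; both the runtime and image endpoints are pointed at cri-dockerd here. A sketch of writing that same two-line config in Go (must run as root to write under /etc):

package main

import "os"

const crictlCfg = "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n" +
	"image-endpoint: unix:///var/run/cri-dockerd.sock\n"

func main() {
	// Same content the log shows being tee'd into place.
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlCfg), 0o644); err != nil {
		panic(err)
	}
}
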
	I0906 15:09:55.682716   28549 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:09:55.745594   28549 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:09:55.809234   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:09:55.885824   28549 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:09:56.120243   28549 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:09:56.183495   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:09:56.248991   28549 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:09:56.258266   28549 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:09:56.258348   28549 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:09:56.262063   28549 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 15:09:56.262079   28549 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 15:09:56.262086   28549 command_runner.go:130] > Device: 96h/150d	Inode: 114         Links: 1
	I0906 15:09:56.262095   28549 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0906 15:09:56.262105   28549 command_runner.go:130] > Access: 2022-09-06 22:09:55.594302366 +0000
	I0906 15:09:56.262110   28549 command_runner.go:130] > Modify: 2022-09-06 22:09:55.594302366 +0000
	I0906 15:09:56.262115   28549 command_runner.go:130] > Change: 2022-09-06 22:09:55.595302366 +0000
	I0906 15:09:56.262119   28549 command_runner.go:130] >  Birth: -
	I0906 15:09:56.262197   28549 start.go:471] Will wait 60s for crictl version
	I0906 15:09:56.262239   28549 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:09:56.289764   28549 command_runner.go:130] > Version:  0.1.0
	I0906 15:09:56.289775   28549 command_runner.go:130] > RuntimeName:  docker
	I0906 15:09:56.289778   28549 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0906 15:09:56.289782   28549 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0906 15:09:56.291804   28549 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:09:56.291879   28549 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:09:56.324013   28549 command_runner.go:130] > 20.10.17
	I0906 15:09:56.327098   28549 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:09:56.359718   28549 command_runner.go:130] > 20.10.17
	I0906 15:09:56.406489   28549 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:09:56.406607   28549 cli_runner.go:164] Run: docker exec -t multinode-20220906150606-22187 dig +short host.docker.internal
	I0906 15:09:56.527846   28549 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
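
host.docker.internal only resolves inside Docker Desktop containers, so the host IP is learned by running dig through `docker exec`. The equivalent probe in Go (container name taken from the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "exec", "-t",
		"multinode-20220906150606-22187", "dig", "+short", "host.docker.internal").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. 192.168.65.2
}
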
	I0906 15:09:56.527954   28549 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:09:56.532014   28549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:09:56.541444   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:56.605087   28549 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:09:56.605164   28549 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:09:56.632176   28549 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.0
	I0906 15:09:56.632190   28549 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.0
	I0906 15:09:56.632195   28549 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.0
	I0906 15:09:56.632202   28549 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.0
	I0906 15:09:56.632206   28549 command_runner.go:130] > kindest/kindnetd:v20220726-ed811e41
	I0906 15:09:56.632211   28549 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0906 15:09:56.632214   28549 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0906 15:09:56.632220   28549 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0906 15:09:56.632224   28549 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0906 15:09:56.632228   28549 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:09:56.632231   28549 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 15:09:56.635153   28549 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	kindest/kindnetd:v20220726-ed811e41
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 15:09:56.635172   28549 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:09:56.635303   28549 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:09:56.660686   28549 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.0
	I0906 15:09:56.660699   28549 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.0
	I0906 15:09:56.660703   28549 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.0
	I0906 15:09:56.660707   28549 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.0
	I0906 15:09:56.660710   28549 command_runner.go:130] > kindest/kindnetd:v20220726-ed811e41
	I0906 15:09:56.660714   28549 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0906 15:09:56.660718   28549 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0906 15:09:56.660733   28549 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0906 15:09:56.660737   28549 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0906 15:09:56.660741   28549 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:09:56.660754   28549 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 15:09:56.663751   28549 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	kindest/kindnetd:v20220726-ed811e41
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 15:09:56.663767   28549 cache_images.go:84] Images are preloaded, skipping loading
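
The preload tarball extraction is skipped because every image the cluster needs already appears in `docker images`. A sketch of that check (hypothetical helper; the required list would come from the preload manifest for v1.25.0/docker):

package preloaddemo

import (
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every required image tag is already known
// to the Docker daemon, in which case tarball extraction can be skipped.
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := make(map[string]bool)
	for _, tag := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[tag] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil // at least one image missing: extract the preload
		}
	}
	return true, nil
}
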
	I0906 15:09:56.663845   28549 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:09:56.733245   28549 command_runner.go:130] > systemd
	I0906 15:09:56.736255   28549 cni.go:95] Creating CNI manager for ""
	I0906 15:09:56.736268   28549 cni.go:156] 3 nodes found, recommending kindnet
	I0906 15:09:56.736287   28549 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:09:56.736297   28549 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220906150606-22187 NodeName:multinode-20220906150606-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:09:56.736411   28549 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220906150606-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
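
The kubeadm.yaml above is rendered from per-profile values (advertise address, CRI socket, node name, pod subnet). A toy Go text/template rendering of just the InitConfiguration fragment; the template text and field names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"AdvertiseAddress": "192.168.58.2",
		"APIServerPort":    "8443",
		"CRISocket":        "/var/run/cri-dockerd.sock",
		"NodeName":         "multinode-20220906150606-22187",
		"NodeIP":           "192.168.58.2",
	})
}
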
	
	I0906 15:09:56.736496   28549 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220906150606-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:09:56.736555   28549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:09:56.743040   28549 command_runner.go:130] > kubeadm
	I0906 15:09:56.743047   28549 command_runner.go:130] > kubectl
	I0906 15:09:56.743050   28549 command_runner.go:130] > kubelet
	I0906 15:09:56.743595   28549 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:09:56.743641   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:09:56.750261   28549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (492 bytes)
	I0906 15:09:56.763192   28549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:09:56.775146   28549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0906 15:09:56.787316   28549 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:09:56.790851   28549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:09:56.799949   28549 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187 for IP: 192.168.58.2
	I0906 15:09:56.800049   28549 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:09:56.800100   28549 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:09:56.800173   28549 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key
	I0906 15:09:56.800238   28549 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key.cee25041
	I0906 15:09:56.800293   28549 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key
	I0906 15:09:56.800300   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 15:09:56.800320   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 15:09:56.800350   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 15:09:56.800368   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 15:09:56.800384   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 15:09:56.800398   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 15:09:56.800413   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 15:09:56.800428   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 15:09:56.800539   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:09:56.800576   28549 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:09:56.800592   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:09:56.800626   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:09:56.800663   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:09:56.800692   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:09:56.800752   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:09:56.800783   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:56.800805   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem -> /usr/share/ca-certificates/22187.pem
	I0906 15:09:56.800823   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /usr/share/ca-certificates/221872.pem
	I0906 15:09:56.801304   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:09:56.818154   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:09:56.834407   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:09:56.850832   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:09:56.867454   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:09:56.883833   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:09:56.900099   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:09:56.916879   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:09:56.934296   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:09:56.951005   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:09:56.967840   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:09:56.984366   28549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:09:56.996732   28549 ssh_runner.go:195] Run: openssl version
	I0906 15:09:57.001487   28549 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0906 15:09:57.001802   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:09:57.009559   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:57.013118   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:57.013201   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:57.013240   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:09:57.017989   28549 command_runner.go:130] > b5213941
	I0906 15:09:57.018343   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:09:57.025210   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:09:57.033032   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:09:57.036786   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:09:57.036946   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:09:57.036984   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:09:57.041698   28549 command_runner.go:130] > 51391683
	I0906 15:09:57.042008   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:09:57.049449   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:09:57.056973   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:09:57.060800   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:09:57.060824   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:09:57.060862   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:09:57.065703   28549 command_runner.go:130] > 3ec20f2e
	I0906 15:09:57.066065   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
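
The openssl/ln pairs above implement OpenSSL's hashed certificate directory convention: a CA under /etc/ssl/certs is located through a symlink named <subject-hash>.0, with the hash taken from `openssl x509 -hash`. A sketch of one install step in Go (paths from the log; assumes the openssl binary is on PATH and root privileges for /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	// Equivalent of: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace a stale link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
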
	I0906 15:09:57.073101   28549 kubeadm.go:396] StartCluster: {Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:09:57.073206   28549 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:09:57.101666   28549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:09:57.108633   28549 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0906 15:09:57.108647   28549 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0906 15:09:57.108670   28549 command_runner.go:130] > /var/lib/minikube/etcd:
	I0906 15:09:57.108677   28549 command_runner.go:130] > member
	I0906 15:09:57.109421   28549 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:09:57.109435   28549 kubeadm.go:627] restartCluster start
	I0906 15:09:57.109481   28549 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:09:57.116223   28549 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.116281   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:09:57.179468   28549 kubeconfig.go:116] verify returned: extract IP: "multinode-20220906150606-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:09:57.179551   28549 kubeconfig.go:127] "multinode-20220906150606-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:09:57.179804   28549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:09:57.180492   28549 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:09:57.180721   28549 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:09:57.181039   28549 cert_rotation.go:137] Starting client certificate rotation controller
	I0906 15:09:57.181209   28549 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:09:57.188647   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.188700   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:57.196805   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.398928   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.399097   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:57.408772   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.597668   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.597761   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:57.608757   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.798723   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.798862   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:57.808812   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:57.996893   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:57.996985   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.005735   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.198879   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.198959   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.208958   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.398351   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.398450   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.408754   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.598855   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.599021   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.608294   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.796970   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.797072   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:58.808361   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:58.997634   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:58.997814   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.007557   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.198962   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.199103   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.209185   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.398497   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.398622   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.408643   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.597533   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.597690   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.607164   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.798962   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.799094   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:09:59.810038   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:09:59.998952   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:09:59.999087   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:10:00.009819   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.199014   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:10:00.199147   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:10:00.208656   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.208666   28549 api_server.go:165] Checking apiserver status ...
	I0906 15:10:00.208709   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:10:00.216363   28549 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.216375   28549 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
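
The repeated pgrep probes above follow a simple poll-until-deadline pattern: re-run the process check roughly every 200ms and give up with "timed out waiting for the condition" once the deadline passes. A minimal Go sketch of that pattern (not minikube's actual helper; the function name and the 3-second timeout here are illustrative, while the pgrep invocation is copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForAPIServerPID(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits non-zero when no process matches, which surfaces as err.
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	return "", fmt.Errorf("timed out waiting for the condition")
    }

    func main() {
    	pid, err := waitForAPIServerPID(3 * time.Second)
    	if err != nil {
    		fmt.Println("apiserver error:", err)
    		return
    	}
    	fmt.Println("apiserver pid:", pid)
    }
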
	I0906 15:10:00.216382   28549 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:10:00.216437   28549 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:10:00.243805   28549 command_runner.go:130] > df0852bc7a51
	I0906 15:10:00.243819   28549 command_runner.go:130] > 1ed0dda0b42e
	I0906 15:10:00.243823   28549 command_runner.go:130] > a34f733a43c2
	I0906 15:10:00.243826   28549 command_runner.go:130] > c307966101ca
	I0906 15:10:00.243830   28549 command_runner.go:130] > 3c2093315054
	I0906 15:10:00.243833   28549 command_runner.go:130] > fdc326cd3c6a
	I0906 15:10:00.243837   28549 command_runner.go:130] > 4e3670b1600d
	I0906 15:10:00.243841   28549 command_runner.go:130] > 6bd8b364f108
	I0906 15:10:00.243844   28549 command_runner.go:130] > 6d68f544bf54
	I0906 15:10:00.243851   28549 command_runner.go:130] > a165f2074320
	I0906 15:10:00.243854   28549 command_runner.go:130] > 28bc9837a510
	I0906 15:10:00.243857   28549 command_runner.go:130] > 33a1b253bd37
	I0906 15:10:00.243861   28549 command_runner.go:130] > 0c0974b47f92
	I0906 15:10:00.243865   28549 command_runner.go:130] > c27dff0f48e6
	I0906 15:10:00.243869   28549 command_runner.go:130] > 77d6030ab01b
	I0906 15:10:00.243874   28549 command_runner.go:130] > defb450e84c2
	I0906 15:10:00.246728   28549 docker.go:443] Stopping containers: [df0852bc7a51 1ed0dda0b42e a34f733a43c2 c307966101ca 3c2093315054 fdc326cd3c6a 4e3670b1600d 6bd8b364f108 6d68f544bf54 a165f2074320 28bc9837a510 33a1b253bd37 0c0974b47f92 c27dff0f48e6 77d6030ab01b defb450e84c2]
	I0906 15:10:00.246801   28549 ssh_runner.go:195] Run: docker stop df0852bc7a51 1ed0dda0b42e a34f733a43c2 c307966101ca 3c2093315054 fdc326cd3c6a 4e3670b1600d 6bd8b364f108 6d68f544bf54 a165f2074320 28bc9837a510 33a1b253bd37 0c0974b47f92 c27dff0f48e6 77d6030ab01b defb450e84c2
	I0906 15:10:00.271650   28549 command_runner.go:130] > df0852bc7a51
	I0906 15:10:00.271820   28549 command_runner.go:130] > 1ed0dda0b42e
	I0906 15:10:00.272033   28549 command_runner.go:130] > a34f733a43c2
	I0906 15:10:00.272042   28549 command_runner.go:130] > c307966101ca
	I0906 15:10:00.272050   28549 command_runner.go:130] > 3c2093315054
	I0906 15:10:00.272056   28549 command_runner.go:130] > fdc326cd3c6a
	I0906 15:10:00.272065   28549 command_runner.go:130] > 4e3670b1600d
	I0906 15:10:00.272297   28549 command_runner.go:130] > 6bd8b364f108
	I0906 15:10:00.272303   28549 command_runner.go:130] > 6d68f544bf54
	I0906 15:10:00.272318   28549 command_runner.go:130] > a165f2074320
	I0906 15:10:00.272323   28549 command_runner.go:130] > 28bc9837a510
	I0906 15:10:00.272328   28549 command_runner.go:130] > 33a1b253bd37
	I0906 15:10:00.272333   28549 command_runner.go:130] > 0c0974b47f92
	I0906 15:10:00.272338   28549 command_runner.go:130] > c27dff0f48e6
	I0906 15:10:00.272343   28549 command_runner.go:130] > 77d6030ab01b
	I0906 15:10:00.272352   28549 command_runner.go:130] > defb450e84c2
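
Stopping the kube-system containers above is a two-step docker pipeline: list container IDs whose names match the kubelet's kube-system naming scheme, then pass the whole list to a single `docker stop`. A sketch of that pipeline; the filter and format strings are copied from the log, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List all (including stopped) containers from kube-system pods.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return // nothing to stop
    	}
    	fmt.Println("Stopping containers:", ids)
    	// One docker invocation stops the whole batch.
    	args := append([]string{"stop"}, ids...)
    	if err := exec.Command("docker", args...).Run(); err != nil {
    		panic(err)
    	}
    }
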
	I0906 15:10:00.275422   28549 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:10:00.285214   28549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:10:00.291920   28549 command_runner.go:130] > -rw------- 1 root root 5639 Sep  6 22:06 /etc/kubernetes/admin.conf
	I0906 15:10:00.291931   28549 command_runner.go:130] > -rw------- 1 root root 5656 Sep  6 22:06 /etc/kubernetes/controller-manager.conf
	I0906 15:10:00.291936   28549 command_runner.go:130] > -rw------- 1 root root 2059 Sep  6 22:06 /etc/kubernetes/kubelet.conf
	I0906 15:10:00.291946   28549 command_runner.go:130] > -rw------- 1 root root 5600 Sep  6 22:06 /etc/kubernetes/scheduler.conf
	I0906 15:10:00.292869   28549 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep  6 22:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Sep  6 22:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep  6 22:06 /etc/kubernetes/scheduler.conf
	
	I0906 15:10:00.292915   28549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:10:00.299598   28549 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0906 15:10:00.300311   28549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:10:00.306656   28549 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0906 15:10:00.307414   28549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:10:00.314205   28549 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.314263   28549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:10:00.321057   28549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:10:00.328298   28549 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:10:00.328346   28549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
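
The grep checks above decide which stale kubeconfig files to delete before regeneration: any file that no longer references https://control-plane.minikube.internal:8443 is removed so the following kubeadm phase rewrites it. A sketch under that assumption (file list and endpoint taken from the log; error handling is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the pattern is not found in the file.
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			if err := os.Remove(f); err != nil {
    				fmt.Println("remove failed:", err)
    			}
    		}
    	}
    }
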
	I0906 15:10:00.334828   28549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:10:00.341880   28549 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:10:00.341893   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:00.380872   28549 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:10:00.380888   28549 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0906 15:10:00.380954   28549 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0906 15:10:00.381325   28549 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:10:00.382035   28549 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0906 15:10:00.382044   28549 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:10:00.382048   28549 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0906 15:10:00.382375   28549 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0906 15:10:00.382548   28549 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:10:00.383177   28549 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:10:00.383189   28549 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:10:00.383403   28549 command_runner.go:130] > [certs] Using the existing "sa" key
	I0906 15:10:00.386570   28549 command_runner.go:130] ! W0906 22:10:00.392914    1106 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:00.386587   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:00.426694   28549 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:10:00.589592   28549 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0906 15:10:00.685244   28549 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0906 15:10:00.936853   28549 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:10:01.134938   28549 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:10:01.139172   28549 command_runner.go:130] ! W0906 22:10:00.438679    1116 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:01.139201   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:01.189116   28549 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:10:01.189692   28549 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:10:01.189864   28549 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0906 15:10:01.259629   28549 command_runner.go:130] ! W0906 22:10:01.192033    1138 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:01.259647   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:01.299337   28549 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:10:01.299355   28549 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:10:01.304593   28549 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:10:01.305432   28549 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:10:01.308987   28549 command_runner.go:130] ! W0906 22:10:01.310921    1172 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:01.309011   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:01.360596   28549 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:10:01.366630   28549 command_runner.go:130] ! W0906 22:10:01.371856    1188 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
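
Rather than a full `kubeadm init`, the restart path above drives individual init phases against the generated config, in order: certs, kubeconfig, kubelet-start, control-plane, and etcd (the `addon` phase follows later in this log, after the apiserver is healthy). A sketch of that sequence, assuming a `kubeadm` binary on PATH; the log itself invokes the pinned binary under /var/lib/minikube/binaries/v1.25.0:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		out, err := exec.Command("kubeadm", args...).CombinedOutput()
    		fmt.Printf("kubeadm %v:\n%s", args, out)
    		if err != nil {
    			panic(err) // a failed phase aborts the restart
    		}
    	}
    }
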
	I0906 15:10:01.366667   28549 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:10:01.366730   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:10:01.913225   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:10:02.413164   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:10:02.423684   28549 command_runner.go:130] > 1664
	I0906 15:10:02.423862   28549 api_server.go:71] duration metric: took 1.057205507s to wait for apiserver process to appear ...
	I0906 15:10:02.423883   28549 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:10:02.423902   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:02.425131   28549 api_server.go:256] stopped: https://127.0.0.1:57200/healthz: Get "https://127.0.0.1:57200/healthz": EOF
	I0906 15:10:02.925542   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:05.360035   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:10:05.360049   28549 api_server.go:102] status: https://127.0.0.1:57200/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:10:05.425330   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:05.433768   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:10:05.433787   28549 api_server.go:102] status: https://127.0.0.1:57200/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:10:05.926806   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:05.933750   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:10:05.933765   28549 api_server.go:102] status: https://127.0.0.1:57200/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:10:06.425202   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:06.431557   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:10:06.431574   28549 api_server.go:102] status: https://127.0.0.1:57200/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:10:06.925298   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:06.931859   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 200:
	ok
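
The healthz loop above tolerates an initial EOF (apiserver not yet listening), then a 403 for system:anonymous, then 500s while poststarthooks finish, and stops at the first 200. A minimal sketch of such a poll, assuming a self-signed serving certificate (hence InsecureSkipVerify) and the forwarded local port from the log; a real client would present the client certificates configured earlier instead of probing anonymously:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(30 * time.Second)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://127.0.0.1:57200/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // apiserver is healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("healthz never returned 200")
    }
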
	I0906 15:10:06.931916   28549 round_trippers.go:463] GET https://127.0.0.1:57200/version
	I0906 15:10:06.931921   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:06.931928   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:06.931934   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:06.938009   28549 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 15:10:06.938019   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:06.938024   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:06.938029   28549 round_trippers.go:580]     Content-Length: 261
	I0906 15:10:06.938034   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:06 GMT
	I0906 15:10:06.938040   28549 round_trippers.go:580]     Audit-Id: 1e243c70-94be-4fec-b6f9-31bf75252e92
	I0906 15:10:06.938044   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:06.938049   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:06.938054   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:06.938073   28549 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.0",
	  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
	  "gitTreeState": "clean",
	  "buildDate": "2022-08-23T17:38:15Z",
	  "goVersion": "go1.19",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 15:10:06.938122   28549 api_server.go:140] control plane version: v1.25.0
	I0906 15:10:06.938129   28549 api_server.go:130] duration metric: took 4.5142281s to wait for apiserver health ...
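
The version probe is an ordinary GET whose JSON body decodes into the fields the log reports ("control plane version: v1.25.0"). A small sketch of that decode; the struct fields mirror the JSON shown above, and the literal body here is trimmed for brevity:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	body := []byte(`{"major":"1","minor":"25","gitVersion":"v1.25.0"}`)
    	var v struct {
    		Major      string `json:"major"`
    		Minor      string `json:"minor"`
    		GitVersion string `json:"gitVersion"`
    	}
    	if err := json.Unmarshal(body, &v); err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion)
    }
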
	I0906 15:10:06.938134   28549 cni.go:95] Creating CNI manager for ""
	I0906 15:10:06.938141   28549 cni.go:156] 3 nodes found, recommending kindnet
	I0906 15:10:06.961636   28549 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 15:10:06.982476   28549 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 15:10:06.987759   28549 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0906 15:10:06.987773   28549 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0906 15:10:06.987780   28549 command_runner.go:130] > Device: 8eh/142d	Inode: 267134      Links: 1
	I0906 15:10:06.987788   28549 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 15:10:06.987805   28549 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0906 15:10:06.987814   28549 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0906 15:10:06.987822   28549 command_runner.go:130] > Change: 2022-09-06 21:44:51.197359839 +0000
	I0906 15:10:06.987829   28549 command_runner.go:130] >  Birth: -
	I0906 15:10:06.988166   28549 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.0/kubectl ...
	I0906 15:10:06.988174   28549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0906 15:10:07.001486   28549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 15:10:08.001946   28549 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0906 15:10:08.005028   28549 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0906 15:10:08.008987   28549 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0906 15:10:08.020307   28549 command_runner.go:130] > daemonset.apps/kindnet configured
	I0906 15:10:08.030092   28549 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.02857736s)
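
Applying the CNI manifest is plain `kubectl apply` with the in-VM kubeconfig: the kindnet manifest is first written to /var/tmp/minikube/cni.yaml (the `scp memory` line above), then applied with the pinned kubectl. A sketch using the paths from the log; the manifest content itself is elided:

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	manifest := []byte("# kindnet DaemonSet + RBAC manifest elided\n")
    	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
    		panic(err)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.25.0/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }
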
	I0906 15:10:08.030120   28549 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:10:08.030180   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:08.030185   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.030191   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.030197   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.034496   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:08.034509   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.034514   28549 round_trippers.go:580]     Audit-Id: 0076a17a-44ea-4fd7-be39-ccac2b826ad8
	I0906 15:10:08.034519   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.034530   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.034537   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.034543   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.034550   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.037686   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"720"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"410","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84179 chars]

	I0906 15:10:08.040647   28549 system_pods.go:59] 12 kube-system pods found
	I0906 15:10:08.040662   28549 system_pods.go:61] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:10:08.040673   28549 system_pods.go:61] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:10:08.040680   28549 system_pods.go:61] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:10:08.040683   28549 system_pods.go:61] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:10:08.040687   28549 system_pods.go:61] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:10:08.040695   28549 system_pods.go:61] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:10:08.040700   28549 system_pods.go:61] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:10:08.040704   28549 system_pods.go:61] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:10:08.040707   28549 system_pods.go:61] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:10:08.040711   28549 system_pods.go:61] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:10:08.040715   28549 system_pods.go:61] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:10:08.040721   28549 system_pods.go:61] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running
	I0906 15:10:08.040725   28549 system_pods.go:74] duration metric: took 10.600213ms to wait for pod list to return data ...
	I0906 15:10:08.040731   28549 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:10:08.040768   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes
	I0906 15:10:08.040772   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.040778   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.040784   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.043531   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:08.043544   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.043552   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.043561   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.043569   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.043574   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.043579   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.043583   28549 round_trippers.go:580]     Audit-Id: 98cf2dab-30f1-49e4-befe-b2dea3ce89db
	I0906 15:10:08.044185   28549 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"720"},"items":[{"metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","op [truncated 16412 chars]
	I0906 15:10:08.044888   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:08.044907   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:08.044920   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:08.044926   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:08.044931   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:08.044939   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:08.044946   28549 node_conditions.go:105] duration metric: took 4.210209ms to run NodePressure ...
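
The NodePressure verification above boils down to reading each node's status.capacity from the NodeList and confirming that cpu and ephemeral-storage are populated. An illustrative decode; the literal body here is trimmed to the values the log reports:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	body := []byte(`{"items":[{"status":{"capacity":{"cpu":"6","ephemeral-storage":"61202244Ki"}}}]}`)
    	var list struct {
    		Items []struct {
    			Status struct {
    				Capacity map[string]string `json:"capacity"`
    			} `json:"status"`
    		} `json:"items"`
    	}
    	if err := json.Unmarshal(body, &list); err != nil {
    		panic(err)
    	}
    	for _, n := range list.Items {
    		fmt.Println("cpu:", n.Status.Capacity["cpu"],
    			"ephemeral:", n.Status.Capacity["ephemeral-storage"])
    	}
    }
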
	I0906 15:10:08.044966   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:10:08.236877   28549 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0906 15:10:08.310612   28549 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0906 15:10:08.314079   28549 command_runner.go:130] ! W0906 22:10:08.133832    2389 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:08.314099   28549 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:10:08.314148   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0906 15:10:08.314153   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.314159   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.314165   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.317077   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:08.317089   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.317095   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.317127   28549 round_trippers.go:580]     Audit-Id: 34232c55-f461-4f54-8ef6-b8a79984f74c
	I0906 15:10:08.317133   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.317137   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.317142   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.317147   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.317380   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"724"},"items":[{"metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"368","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.adve [truncated 30664 chars]
	I0906 15:10:08.318110   28549 kubeadm.go:778] kubelet initialised
	I0906 15:10:08.318118   28549 kubeadm.go:779] duration metric: took 4.012273ms waiting for restarted kubelet to initialise ...
	I0906 15:10:08.318126   28549 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
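
Each of the pod_ready waits that follow boils down to fetching the pod object and checking its Ready condition for status "True". An illustrative sketch using the raw REST paths from the log; TLS and authentication are reduced to an insecure client here, whereas the real client uses the certificates shown earlier:

    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    type pod struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    // podReady reports whether the named pod carries condition Ready=True.
    func podReady(client *http.Client, base, ns, name string) (bool, error) {
    	resp, err := client.Get(fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s", base, ns, name))
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	var p pod
    	if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
    		return false, err
    	}
    	for _, c := range p.Status.Conditions {
    		if c.Type == "Ready" {
    			return c.Status == "True", nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
    	ready, err := podReady(client, "https://127.0.0.1:57200",
    		"kube-system", "coredns-565d847f94-t6l66")
    	fmt.Println("ready:", ready, "err:", err)
    }
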
	I0906 15:10:08.318160   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:08.318165   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.318171   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.318177   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.321144   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:08.321157   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.321164   28549 round_trippers.go:580]     Audit-Id: ad350e2b-21d4-47e7-ad0f-330fa3160745
	I0906 15:10:08.321171   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.321177   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.321183   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.321188   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.321193   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.322905   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"724"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"410","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84179 chars]
	I0906 15:10:08.324801   28549 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:08.324848   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:08.324853   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.324859   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.324866   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.326845   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.326855   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.326860   28549 round_trippers.go:580]     Audit-Id: e11f06ae-6ad7-4233-a87e-19d865b0b514
	I0906 15:10:08.326865   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.326870   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.326878   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.326883   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.326888   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.326944   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"410","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6357 chars]
	I0906 15:10:08.327210   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:08.327216   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.327222   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.327227   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.329072   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.329081   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.329087   28549 round_trippers.go:580]     Audit-Id: 7f7972d1-75fc-4c25-865f-6afa7f3961cb
	I0906 15:10:08.329092   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.329096   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.329101   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.329106   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.329111   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.329279   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:08.329466   28549 pod_ready.go:92] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:08.329471   28549 pod_ready.go:81] duration metric: took 4.658673ms waiting for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:08.329477   28549 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:08.329503   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:10:08.329508   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.329513   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.329518   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.331343   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.331352   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.331358   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.331363   28549 round_trippers.go:580]     Audit-Id: 56583076-1ca9-4009-aeb0-929b451b72f4
	I0906 15:10:08.331368   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.331374   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.331378   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.331384   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.331694   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"368","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash" [truncated 5906 chars]
	I0906 15:10:08.331917   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:08.331923   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.331932   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.331937   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.333889   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.333898   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.333903   28549 round_trippers.go:580]     Audit-Id: d7dc63db-b0a6-44f1-8289-2df9fec46c77
	I0906 15:10:08.333908   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.333913   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.333917   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.333922   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.333927   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.334075   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:08.334247   28549 pod_ready.go:92] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:08.334253   28549 pod_ready.go:81] duration metric: took 4.77224ms waiting for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:08.334262   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:08.334294   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:08.334299   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.334304   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.334309   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.336457   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:08.336464   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.336469   28549 round_trippers.go:580]     Audit-Id: 014453c6-2bf5-431e-871f-02eca14b5180
	I0906 15:10:08.336474   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.336479   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.336484   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.336488   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.336493   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.336579   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:08.336833   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:08.336838   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.336844   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.336850   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.338331   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.338339   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.338345   28549 round_trippers.go:580]     Audit-Id: 08d4e436-3650-40a8-adfa-91ed0e6bf3d6
	I0906 15:10:08.338349   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.338354   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.338361   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.338367   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.338371   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.338686   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:08.840390   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:08.840413   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.840443   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.840479   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.843856   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:08.843871   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.843882   28549 round_trippers.go:580]     Audit-Id: 1ef375e7-272d-459a-9173-a59617518416
	I0906 15:10:08.843892   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.843900   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.843907   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.843913   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.843923   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.844440   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:08.844821   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:08.844830   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:08.844838   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:08.844845   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:08.846824   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:08.846833   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:08.846838   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:08.846843   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:08 GMT
	I0906 15:10:08.846847   28549 round_trippers.go:580]     Audit-Id: a5d571c3-8969-4883-b1da-c116d2869e69
	I0906 15:10:08.846852   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:08.846856   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:08.846861   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:08.847019   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:09.339078   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:09.339104   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:09.339115   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:09.339125   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:09.342748   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:09.342765   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:09.342773   28549 round_trippers.go:580]     Audit-Id: c07f9b3f-e2dd-42f3-aba8-9879878b8e79
	I0906 15:10:09.342779   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:09.342787   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:09.342793   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:09.342800   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:09.342806   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:09 GMT
	I0906 15:10:09.342924   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:09.343201   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:09.343206   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:09.343212   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:09.343218   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:09.345117   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:09.345128   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:09.345134   28549 round_trippers.go:580]     Audit-Id: 3344fb03-0980-490e-9032-7ce3e7279e77
	I0906 15:10:09.345141   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:09.345153   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:09.345163   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:09.345170   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:09.345175   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:09 GMT
	I0906 15:10:09.345336   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:09.839071   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:09.839084   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:09.839107   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:09.839112   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:09.841411   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:09.841422   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:09.841427   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:09.841432   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:09.841437   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:09.841442   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:09.841446   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:09 GMT
	I0906 15:10:09.841450   28549 round_trippers.go:580]     Audit-Id: 5114bc2f-d424-4c2d-9c30-140694b9ff92
	I0906 15:10:09.841561   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:09.841849   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:09.841855   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:09.841863   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:09.841868   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:09.843568   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:09.843578   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:09.843583   28549 round_trippers.go:580]     Audit-Id: ca742599-67ab-4897-a9bd-7cd08983bee4
	I0906 15:10:09.843588   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:09.843593   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:09.843597   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:09.843601   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:09.843605   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:09 GMT
	I0906 15:10:09.843878   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:10.339423   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:10.339446   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:10.339458   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:10.339468   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:10.343378   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:10.343392   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:10.343405   28549 round_trippers.go:580]     Audit-Id: adfd21fa-d007-4deb-99a9-384ce3521f5c
	I0906 15:10:10.343415   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:10.343429   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:10.343438   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:10.343445   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:10.343453   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:10 GMT
	I0906 15:10:10.343556   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:10.343953   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:10.343962   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:10.343971   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:10.343987   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:10.345896   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:10.345905   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:10.345911   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:10.345918   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:10 GMT
	I0906 15:10:10.345923   28549 round_trippers.go:580]     Audit-Id: 1e522e4c-de52-4d57-befc-ce3825897cc9
	I0906 15:10:10.345927   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:10.345935   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:10.345940   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:10.345995   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:10.346184   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
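
The timestamps make the cadence of this wait loop explicit: consecutive polls begin at 15:10:10.339 and 15:10:10.839, i.e. roughly every 500ms, so the 4m0s budget declared above allows on the order of 240s / 0.5s ≈ 480 attempts before the wait would time out. pod_ready.go:102 is the not-yet-ready branch logged on each unsuccessful check, while pod_ready.go:92 (seen earlier for etcd) records success.
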
	I0906 15:10:10.839437   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:10.839457   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:10.839466   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:10.839473   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:10.842491   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:10.842503   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:10.842510   28549 round_trippers.go:580]     Audit-Id: 0d20f7dd-2af0-4648-b72e-9964414712f6
	I0906 15:10:10.842517   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:10.842523   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:10.842527   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:10.842533   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:10.842537   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:10 GMT
	I0906 15:10:10.843307   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:10.845148   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:10.845157   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:10.845164   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:10.845169   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:10.847540   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:10.847551   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:10.847556   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:10.847561   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:10.847566   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:10.847570   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:10.847576   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:10 GMT
	I0906 15:10:10.847580   28549 round_trippers.go:580]     Audit-Id: 5dfb9918-56cd-4496-ac14-614468834a72
	I0906 15:10:10.847631   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:11.339107   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:11.339134   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:11.339177   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:11.339193   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:11.342129   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:11.342141   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:11.342150   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:11 GMT
	I0906 15:10:11.342155   28549 round_trippers.go:580]     Audit-Id: 31ad3468-261e-4623-b9cc-4d24583e6bff
	I0906 15:10:11.342161   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:11.342165   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:11.342170   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:11.342174   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:11.342458   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:11.342740   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:11.342746   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:11.342752   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:11.342757   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:11.344571   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:11.344581   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:11.344586   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:11.344593   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:11.344599   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:11.344604   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:11 GMT
	I0906 15:10:11.344608   28549 round_trippers.go:580]     Audit-Id: 8205c1a9-17f6-477e-b9f6-f44f045630f1
	I0906 15:10:11.344613   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:11.344651   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:11.839068   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:11.839088   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:11.839097   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:11.839118   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:11.841682   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:11.841693   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:11.841698   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:11.841715   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:11.841723   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:11.841728   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:11.841736   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:11 GMT
	I0906 15:10:11.841744   28549 round_trippers.go:580]     Audit-Id: 074cf248-e2fa-418e-a383-8b506c3051de
	I0906 15:10:11.842110   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:11.842386   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:11.842392   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:11.842397   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:11.842402   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:11.844435   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:11.844445   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:11.844451   28549 round_trippers.go:580]     Audit-Id: 47b52112-289d-4d7b-b5be-5f511f033807
	I0906 15:10:11.844458   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:11.844466   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:11.844474   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:11.844481   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:11.844488   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:11 GMT
	I0906 15:10:11.844564   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:12.339709   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:12.339722   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:12.339738   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:12.339744   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:12.342186   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:12.342196   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:12.342202   28549 round_trippers.go:580]     Audit-Id: e69feef5-fb8e-488b-bc50-73763c330c65
	I0906 15:10:12.342206   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:12.342212   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:12.342216   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:12.342221   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:12.342225   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:12 GMT
	I0906 15:10:12.342302   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:12.342597   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:12.342604   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:12.342610   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:12.342616   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:12.344572   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:12.344581   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:12.344589   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:12.344594   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:12.344599   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:12 GMT
	I0906 15:10:12.344604   28549 round_trippers.go:580]     Audit-Id: b9ae28aa-2cc6-44be-b4b8-b518adbc6134
	I0906 15:10:12.344608   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:12.344613   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:12.344653   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:12.838999   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:12.839031   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:12.839065   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:12.839076   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:12.841956   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:12.841969   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:12.841977   28549 round_trippers.go:580]     Audit-Id: 26aded15-6640-4241-96d3-fab002e7a9c4
	I0906 15:10:12.841982   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:12.841987   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:12.841995   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:12.842002   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:12.842008   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:12 GMT
	I0906 15:10:12.842233   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:12.842513   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:12.842518   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:12.842524   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:12.842530   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:12.844525   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:12.844545   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:12.844557   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:12.844565   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:12 GMT
	I0906 15:10:12.844572   28549 round_trippers.go:580]     Audit-Id: 2860ecd9-cd3f-4e76-880c-341e070be1f2
	I0906 15:10:12.844577   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:12.844583   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:12.844589   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:12.844641   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:12.844825   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:13.339052   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:13.339068   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:13.339076   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:13.339084   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:13.341934   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:13.341947   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:13.341953   28549 round_trippers.go:580]     Audit-Id: 0214da62-133b-4bba-94b8-4428258da43a
	I0906 15:10:13.341962   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:13.341970   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:13.341975   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:13.341980   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:13.341984   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:13 GMT
	I0906 15:10:13.342052   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:13.342332   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:13.342338   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:13.342344   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:13.342372   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:13.344199   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:13.344207   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:13.344212   28549 round_trippers.go:580]     Audit-Id: 60530d9e-23f2-40a2-b69b-af3f89ce4bcd
	I0906 15:10:13.344217   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:13.344222   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:13.344226   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:13.344231   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:13.344236   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:13 GMT
	I0906 15:10:13.344381   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:13.839036   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:13.839047   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:13.839053   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:13.839058   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:13.841203   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:13.841213   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:13.841219   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:13.841223   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:13.841227   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:13.841232   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:13.841237   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:13 GMT
	I0906 15:10:13.841243   28549 round_trippers.go:580]     Audit-Id: de8fea88-770f-4291-bde9-e5853b543fbe
	I0906 15:10:13.841522   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:13.841816   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:13.841823   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:13.841829   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:13.841834   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:13.843383   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:13.843391   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:13.843396   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:13.843402   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:13.843410   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:13 GMT
	I0906 15:10:13.843417   28549 round_trippers.go:580]     Audit-Id: ca94da72-2562-4812-b318-d17e3f58648f
	I0906 15:10:13.843423   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:13.843428   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:13.843605   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:14.339157   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:14.339190   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:14.339202   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:14.339210   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:14.342031   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:14.342043   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:14.342049   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:14.342054   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:14.342060   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:14.342064   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:14 GMT
	I0906 15:10:14.342069   28549 round_trippers.go:580]     Audit-Id: 68bf7fbb-9fd4-4fd2-ac69-afb9bccba288
	I0906 15:10:14.342073   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:14.342144   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:14.342430   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:14.342437   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:14.342443   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:14.342449   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:14.344360   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:14.344370   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:14.344375   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:14.344380   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:14 GMT
	I0906 15:10:14.344385   28549 round_trippers.go:580]     Audit-Id: f2bca961-04be-4d0a-8d62-5e7242bd708e
	I0906 15:10:14.344390   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:14.344395   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:14.344399   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:14.344444   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:14.839198   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:14.839209   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:14.839215   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:14.839221   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:14.841822   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:14.841833   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:14.841838   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:14.841844   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:14 GMT
	I0906 15:10:14.841848   28549 round_trippers.go:580]     Audit-Id: 7eccb69d-94e3-4c88-b851-cce67b037c8c
	I0906 15:10:14.841853   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:14.841858   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:14.841862   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:14.841935   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:14.842207   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:14.842213   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:14.842219   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:14.842228   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:14.843971   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:14.843983   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:14.843988   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:14.843993   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:14.844005   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:14.844020   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:14 GMT
	I0906 15:10:14.844033   28549 round_trippers.go:580]     Audit-Id: ed7eb447-372a-4b3a-a2bd-4882d89bfcef
	I0906 15:10:14.844040   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:14.844234   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:15.340006   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:15.340029   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:15.340042   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:15.340051   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:15.343290   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:15.343301   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:15.343307   28549 round_trippers.go:580]     Audit-Id: 9d1626b4-9e32-4742-a67a-1a12e7aab82f
	I0906 15:10:15.343318   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:15.343324   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:15.343328   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:15.343336   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:15.343341   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:15 GMT
	I0906 15:10:15.343535   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:15.343818   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:15.343824   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:15.343830   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:15.343835   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:15.345521   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:15.345531   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:15.345538   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:15.345546   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:15 GMT
	I0906 15:10:15.345552   28549 round_trippers.go:580]     Audit-Id: 5131f6a6-74bd-4c56-9fbc-a7d8fa11a24c
	I0906 15:10:15.345557   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:15.345563   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:15.345567   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:15.345885   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:15.346062   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:15.839595   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:15.839615   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:15.839626   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:15.839636   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:15.843534   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:15.843546   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:15.843552   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:15 GMT
	I0906 15:10:15.843556   28549 round_trippers.go:580]     Audit-Id: ad811e0b-209a-4183-93e0-13bd82297ca1
	I0906 15:10:15.843561   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:15.843566   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:15.843570   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:15.843575   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:15.843676   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:15.843953   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:15.843959   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:15.843965   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:15.843969   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:15.845852   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:15.845861   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:15.845867   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:15.845874   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:15.845879   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:15.845884   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:15.845889   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:15 GMT
	I0906 15:10:15.845894   28549 round_trippers.go:580]     Audit-Id: fb9b9b5f-6260-4f7a-b44c-df9dd7204a64
	I0906 15:10:15.845937   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:16.339143   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:16.339157   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:16.339165   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:16.339172   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:16.342260   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:16.342270   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:16.342275   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:16.342280   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:16.342285   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:16.342289   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:16 GMT
	I0906 15:10:16.342294   28549 round_trippers.go:580]     Audit-Id: 6ebe6407-718a-4a80-95e0-2291dca56ad7
	I0906 15:10:16.342299   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:16.342392   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:16.342667   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:16.342672   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:16.342678   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:16.342683   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:16.344495   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:16.344504   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:16.344509   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:16.344514   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:16.344519   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:16 GMT
	I0906 15:10:16.344523   28549 round_trippers.go:580]     Audit-Id: 3018b8d5-10c9-4784-90ac-9220fa47e525
	I0906 15:10:16.344528   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:16.344533   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:16.344574   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:16.839076   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:16.839099   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:16.839134   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:16.839147   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:16.841957   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:16.841970   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:16.841975   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:16.841980   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:16.841985   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:16.841989   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:16.841993   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:16 GMT
	I0906 15:10:16.841998   28549 round_trippers.go:580]     Audit-Id: b5f6f0a9-aeb6-4ac7-81d1-62ebe1f363ab
	I0906 15:10:16.842061   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:16.842338   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:16.842344   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:16.842350   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:16.842354   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:16.843917   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:16.843925   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:16.843931   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:16 GMT
	I0906 15:10:16.843935   28549 round_trippers.go:580]     Audit-Id: d98c9752-7b29-443d-b789-81df92ad4623
	I0906 15:10:16.843940   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:16.843945   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:16.843950   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:16.843955   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:16.844490   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:17.339024   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:17.339042   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:17.339050   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:17.339057   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:17.341525   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:17.341534   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:17.341540   28549 round_trippers.go:580]     Audit-Id: 3a74029d-207f-4e86-b98b-6cc0acffebb6
	I0906 15:10:17.341544   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:17.341549   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:17.341554   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:17.341558   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:17.341563   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:17 GMT
	I0906 15:10:17.341624   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:17.341908   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:17.341914   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:17.341920   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:17.341925   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:17.343803   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:17.343811   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:17.343816   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:17 GMT
	I0906 15:10:17.343821   28549 round_trippers.go:580]     Audit-Id: 51c7bd0e-9df3-4c26-afed-f8b7f7259c26
	I0906 15:10:17.343827   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:17.343832   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:17.343836   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:17.343841   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:17.343886   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:17.839304   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:17.839321   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:17.839333   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:17.839352   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:17.841871   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:17.841880   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:17.841886   28549 round_trippers.go:580]     Audit-Id: 3a31913c-f101-40e0-88e0-432208120eb0
	I0906 15:10:17.841890   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:17.841895   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:17.841899   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:17.841904   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:17.841914   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:17 GMT
	I0906 15:10:17.842249   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:17.842523   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:17.842529   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:17.842535   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:17.842540   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:17.844383   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:17.844392   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:17.844399   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:17.844408   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:17.844413   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:17.844421   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:17.844433   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:17 GMT
	I0906 15:10:17.844444   28549 round_trippers.go:580]     Audit-Id: 12cdb2a4-6e55-41b9-9fc5-9855cb60d052
	I0906 15:10:17.844689   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:17.844875   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:18.340600   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:18.340660   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:18.340675   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:18.340689   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:18.344790   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:18.344806   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:18.344814   28549 round_trippers.go:580]     Audit-Id: 48f99909-2ae5-4c09-b87f-a6490253d814
	I0906 15:10:18.344820   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:18.344826   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:18.344832   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:18.344838   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:18.344845   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:18 GMT
	I0906 15:10:18.344942   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:18.345221   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:18.345227   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:18.345233   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:18.345238   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:18.347131   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:18.347140   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:18.347147   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:18.347153   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:18.347158   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:18.347164   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:18 GMT
	I0906 15:10:18.347169   28549 round_trippers.go:580]     Audit-Id: f7b23df2-5bc7-4ece-bf79-cbd3c49381c2
	I0906 15:10:18.347174   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:18.347220   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:18.841160   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:18.841182   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:18.841194   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:18.841204   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:18.844525   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:18.844537   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:18.844544   28549 round_trippers.go:580]     Audit-Id: 00ed7db5-557f-4715-a0f8-53dd9a71372e
	I0906 15:10:18.844549   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:18.844553   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:18.844559   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:18.844563   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:18.844568   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:18 GMT
	I0906 15:10:18.844649   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:18.844930   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:18.844936   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:18.844942   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:18.844947   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:18.846853   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:18.846864   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:18.846871   28549 round_trippers.go:580]     Audit-Id: fe82df8b-3aa8-4287-92bc-21a241aaf673
	I0906 15:10:18.846876   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:18.846881   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:18.846900   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:18.846907   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:18.846912   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:18 GMT
	I0906 15:10:18.846967   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:19.339361   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:19.339384   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:19.339396   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:19.339429   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:19.343234   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:19.343247   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:19.343254   28549 round_trippers.go:580]     Audit-Id: b48e0157-4f07-4d32-aa98-a2b5f0ff3870
	I0906 15:10:19.343261   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:19.343267   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:19.343273   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:19.343279   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:19.343287   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:19 GMT
	I0906 15:10:19.343404   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:19.343772   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:19.343780   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:19.343788   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:19.343795   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:19.345564   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:19.345573   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:19.345579   28549 round_trippers.go:580]     Audit-Id: 8e53a287-5c53-4922-a4f5-c1b0747e8b36
	I0906 15:10:19.345584   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:19.345588   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:19.345593   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:19.345598   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:19.345603   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:19 GMT
	I0906 15:10:19.345643   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:19.839470   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:19.839500   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:19.839510   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:19.839517   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:19.842592   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:19.842601   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:19.842607   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:19.842612   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:19.842616   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:19 GMT
	I0906 15:10:19.842621   28549 round_trippers.go:580]     Audit-Id: 427b2e47-3bfa-46ab-8f29-328a589ca153
	I0906 15:10:19.842626   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:19.842630   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:19.842699   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:19.842982   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:19.842989   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:19.842995   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:19.843000   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:19.844999   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:19.845010   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:19.845016   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:19.845022   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:19 GMT
	I0906 15:10:19.845027   28549 round_trippers.go:580]     Audit-Id: ae8ee28f-68b0-406b-aa9e-e18e376b7ebf
	I0906 15:10:19.845031   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:19.845036   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:19.845040   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:19.845230   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:19.845431   28549 pod_ready.go:102] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:20.339816   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:20.339835   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:20.339847   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:20.339856   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:20.343462   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:20.343472   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:20.343478   28549 round_trippers.go:580]     Audit-Id: 985b7b29-14ac-46f2-af25-3609509f3f7f
	I0906 15:10:20.343483   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:20.343488   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:20.343492   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:20.343497   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:20.343502   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:20 GMT
	I0906 15:10:20.343572   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:20.343852   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:20.343858   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:20.343864   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:20.343869   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:20.345787   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:20.345796   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:20.345802   28549 round_trippers.go:580]     Audit-Id: 22db0a5e-d906-4d0d-a867-6b0700dee4c5
	I0906 15:10:20.345809   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:20.345816   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:20.345821   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:20.345826   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:20.345831   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:20 GMT
	I0906 15:10:20.346081   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:20.839990   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:20.840006   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:20.840014   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:20.840022   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:20.843155   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:20.843171   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:20.843180   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:20.843189   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:20.843195   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:20.843202   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:20 GMT
	I0906 15:10:20.843209   28549 round_trippers.go:580]     Audit-Id: 774a07a2-7d0e-4d41-ad81-b802e2db28f9
	I0906 15:10:20.843213   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:20.843286   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"711","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8715 chars]
	I0906 15:10:20.843564   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:20.843570   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:20.843576   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:20.843581   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:20.845482   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:20.845491   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:20.845497   28549 round_trippers.go:580]     Audit-Id: a7996f4b-dbd8-4368-b8f9-85111b96fbfb
	I0906 15:10:20.845501   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:20.845507   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:20.845512   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:20.845517   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:20.845521   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:20 GMT
	I0906 15:10:20.845559   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.340997   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:21.341036   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.341096   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.341108   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.344338   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:21.344350   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.344356   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.344363   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.344374   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.344383   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.344387   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.344392   28549 round_trippers.go:580]     Audit-Id: 17b20b2a-8af5-4b3d-a0df-3b022604aad0
	I0906 15:10:21.344471   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"793","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8471 chars]
	I0906 15:10:21.344744   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.344749   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.344755   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.344760   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.346614   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.346623   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.346629   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.346634   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.346642   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.346649   28549 round_trippers.go:580]     Audit-Id: fe5de989-dc99-4f86-aca0-e012b3e57093
	I0906 15:10:21.346656   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.346663   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.346721   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.346899   28549 pod_ready.go:92] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.346910   28549 pod_ready.go:81] duration metric: took 13.012599203s waiting for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.346918   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.346944   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:10:21.346949   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.346955   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.346961   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.348873   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.348882   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.348887   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.348891   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.348896   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.348901   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.348906   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.348910   28549 round_trippers.go:580]     Audit-Id: e227e4e0-76e6-4bf2-a5b6-97b3998865f5
	I0906 15:10:21.348961   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"768","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 8044 chars]
	I0906 15:10:21.349207   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.349213   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.349218   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.349229   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.351026   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.351035   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.351040   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.351046   28549 round_trippers.go:580]     Audit-Id: cd3efd18-23e4-4ff6-bb69-a9619ca15d65
	I0906 15:10:21.351056   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.351061   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.351066   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.351071   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.351111   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.351282   28549 pod_ready.go:92] pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.351288   28549 pod_ready.go:81] duration metric: took 4.364684ms waiting for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.351293   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.351317   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-czbjx
	I0906 15:10:21.351321   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.351327   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.351332   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.352854   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.352864   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.352869   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.352875   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.352879   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.352885   28549 round_trippers.go:580]     Audit-Id: 790ccaef-5484-4d6c-82b2-3e5f02145fc2
	I0906 15:10:21.352890   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.352894   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.352934   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-czbjx","generateName":"kube-proxy-","namespace":"kube-system","uid":"c88daf0a-05d7-45b7-b888-8e0749e4d321","resourceVersion":"672","creationTimestamp":"2022-09-06T22:08:13Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:08:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5772 chars]
	I0906 15:10:21.353161   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m03
	I0906 15:10:21.353166   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.353172   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.353177   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.354679   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.354687   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.354692   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.354697   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.354701   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.354706   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.354710   28549 round_trippers.go:580]     Audit-Id: c009e443-4fee-4d7e-9efb-94d7a83314ea
	I0906 15:10:21.354716   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.355011   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m03","uid":"268cefad-05d1-4e4b-b44e-2d8678e78e39","resourceVersion":"685","creationTimestamp":"2022-09-06T22:09:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:09:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostnam
e":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Upd [truncated 4408 chars]
	I0906 15:10:21.355171   28549 pod_ready.go:92] pod "kube-proxy-czbjx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.355181   28549 pod_ready.go:81] duration metric: took 3.883796ms waiting for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.355187   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.355211   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:10:21.355215   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.355221   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.355226   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.356811   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.356819   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.356824   28549 round_trippers.go:580]     Audit-Id: d2f93656-abfa-4313-aee3-082467b35dcb
	I0906 15:10:21.356828   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.356834   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.356839   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.356843   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.356848   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.357208   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kkmpm","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b228e9a-6577-46a3-b848-9c9fca602ba6","resourceVersion":"749","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5762 chars]
	I0906 15:10:21.357428   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.357434   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.357441   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.357446   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.359175   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.359183   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.359188   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.359194   28549 round_trippers.go:580]     Audit-Id: 1b840522-c0a6-45df-ae6e-018e1ea1fbc6
	I0906 15:10:21.359200   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.359205   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.359219   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.359229   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.359273   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.359473   28549 pod_ready.go:92] pod "kube-proxy-kkmpm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.359478   28549 pod_ready.go:81] duration metric: took 4.287168ms waiting for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.359484   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.359508   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:10:21.359512   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.359518   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.359523   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.361190   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.361201   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.361208   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.361214   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.361220   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.361227   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.361233   28549 round_trippers.go:580]     Audit-Id: 54fb5d05-38bf-494b-943a-712cf0a16b99
	I0906 15:10:21.361239   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.361340   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wnrrx","generateName":"kube-proxy-","namespace":"kube-system","uid":"260cbcc2-7110-48ce-aa3d-482b3694ae6d","resourceVersion":"476","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5565 chars]
	I0906 15:10:21.361562   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:10:21.361568   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.361574   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.361579   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.363237   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:21.363247   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.363252   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.363257   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.363263   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.363268   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.363274   28549 round_trippers.go:580]     Audit-Id: 69ab58ef-d69f-4d8b-87c2-2737433c22fd
	I0906 15:10:21.363279   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.363326   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m02","uid":"4f069859-75f2-4e6f-a5c1-5cceb9510b05","resourceVersion":"602","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4591 chars]
	I0906 15:10:21.363477   28549 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.363485   28549 pod_ready.go:81] duration metric: took 3.996705ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.363490   28549 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.542259   28549 request.go:533] Waited for 178.688593ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:10:21.542317   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:10:21.542325   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.542334   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.542343   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.545474   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:21.545487   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.545492   28549 round_trippers.go:580]     Audit-Id: 40b070f3-38fc-4d9d-8df0-c3e1bcf5608d
	I0906 15:10:21.545498   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.545503   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.545508   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.545514   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.545518   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.545563   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"780","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4928 chars]
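
The "Waited for ... due to client-side throttling" lines here and just below come from client-go's token-bucket rate limiter, not from server-side priority and fairness: with rest.Config.QPS and Burst left at zero, the client defaults apply (5 QPS, burst 10), so a polling loop this tight starts queueing requests. A minimal sketch of where those knobs live, assuming client-go; the numbers are illustrative, not minikube's settings:

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newThrottledClient builds a clientset whose rate limiter has more
    // headroom than the defaults, which shortens waits like those logged.
    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go defaults to 5 when left at zero
        cfg.Burst = 100 // client-go defaults to 10 when left at zero
        return kubernetes.NewForConfig(cfg)
    }
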
	I0906 15:10:21.741068   28549 request.go:533] Waited for 195.255208ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.741118   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.741126   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.741138   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.741153   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.744098   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:21.744110   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.744116   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.744120   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.744124   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.744129   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.744134   28549 round_trippers.go:580]     Audit-Id: c2a2df9f-e749-45ea-ae89-8bb1c4f22f95
	I0906 15:10:21.744139   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.744185   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.744377   28549 pod_ready.go:92] pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:21.744383   28549 pod_ready.go:81] duration metric: took 380.882419ms waiting for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:21.744390   28549 pod_ready.go:38] duration metric: took 13.426212653s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
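
Each of these waits is bounded ("waiting up to 4m0s ..."), so the overall pattern is poll-until-ready-or-timeout. A minimal sketch using the apimachinery wait helpers of this vintage (v0.25-era client libraries); waitForPodReady is a hypothetical name, and podIsReady is the condition check sketched earlier:

    package sketch

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodReady polls the Ready condition every 500ms and gives up
    // after 4 minutes, mirroring the "waiting up to 4m0s" bound in the log.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediateWithContext(ctx, 500*time.Millisecond, 4*time.Minute,
            func(ctx context.Context) (bool, error) {
                return podIsReady(ctx, cs, ns, name)
            })
    }
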
	I0906 15:10:21.744403   28549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:10:21.752010   28549 command_runner.go:130] > -16
	I0906 15:10:21.752125   28549 ops.go:34] apiserver oom_adj: -16
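
The oom_adj probe above confirms the API server is shielded from the kernel's OOM killer: a negative value such as -16 biases the killer strongly toward other processes. The value is just a /proc file, so reading it programmatically is a one-liner; the helper name is illustrative:

    package sketch

    import (
        "fmt"
        "os"
        "strings"
    )

    // oomAdj reads /proc/<pid>/oom_adj, the legacy knob the log checks;
    // negative values make the process a less likely OOM-kill victim.
    func oomAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }
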
	I0906 15:10:21.752133   28549 kubeadm.go:631] restartCluster took 24.642618508s
	I0906 15:10:21.752142   28549 kubeadm.go:398] StartCluster complete in 24.678973392s
	I0906 15:10:21.752158   28549 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:10:21.752237   28549 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:10:21.752629   28549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:10:21.753292   28549 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:10:21.753465   28549 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-2022090615060
6-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
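
The rest.Config dumped above is driven by the profile's client cert, key, and CA paths (the TLSClientConfig fields; the dump shows client-go's sanitized form of that struct). A minimal sketch of assembling an equivalent config by hand, with placeholder paths:

    package sketch

    import "k8s.io/client-go/rest"

    // profileConfig builds a rest.Config like the one logged: host plus
    // mutual-TLS material from the minikube profile directory.
    func profileConfig(host, certFile, keyFile, caFile string) *rest.Config {
        return &rest.Config{
            Host: host, // e.g. "https://127.0.0.1:57200" in this run
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: certFile,
                KeyFile:  keyFile,
                CAFile:   caFile,
            },
        }
    }
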
	I0906 15:10:21.753649   28549 round_trippers.go:463] GET https://127.0.0.1:57200/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0906 15:10:21.753655   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.753661   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.753667   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.755996   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:21.756005   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.756010   28549 round_trippers.go:580]     Audit-Id: bf9df806-cc5a-4084-a4a3-2786162f021a
	I0906 15:10:21.756017   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.756022   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.756027   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.756032   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.756037   28549 round_trippers.go:580]     Content-Length: 291
	I0906 15:10:21.756042   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.756052   28549 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a49f3069-8a92-4785-ab5f-7ea0a1721073","resourceVersion":"789","creationTimestamp":"2022-09-06T22:06:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0906 15:10:21.756138   28549 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220906150606-22187" rescaled to 1
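
The GET of .../deployments/coredns/scale and the "rescaled to 1" line above use the Deployment's Scale subresource rather than patching the Deployment itself. A minimal sketch of the read side, assuming client-go (UpdateScale on the same interface is the corresponding write); the function name is illustrative:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // corednsReplicas reads spec.replicas from the coredns Scale
    // subresource, the same object returned in the response body above.
    func corednsReplicas(ctx context.Context, cs kubernetes.Interface) (int32, error) {
        scale, err := cs.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return 0, err
        }
        return scale.Spec.Replicas, nil
    }
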
	I0906 15:10:21.756169   28549 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:10:21.756175   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:10:21.756199   28549 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0906 15:10:21.756313   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:10:21.777725   28549 out.go:177] * Verifying Kubernetes components...
	I0906 15:10:21.777787   28549 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220906150606-22187"
	I0906 15:10:21.777810   28549 addons.go:65] Setting default-storageclass=true in profile "multinode-20220906150606-22187"
	I0906 15:10:21.777814   28549 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220906150606-22187"
	W0906 15:10:21.826051   28549 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:10:21.826050   28549 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220906150606-22187"
	I0906 15:10:21.810787   28549 command_runner.go:130] > apiVersion: v1
	I0906 15:10:21.826092   28549 command_runner.go:130] > data:
	I0906 15:10:21.826065   28549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:10:21.826106   28549 command_runner.go:130] >   Corefile: |
	I0906 15:10:21.826120   28549 command_runner.go:130] >     .:53 {
	I0906 15:10:21.826127   28549 command_runner.go:130] >         errors
	I0906 15:10:21.826135   28549 command_runner.go:130] >         health {
	I0906 15:10:21.826143   28549 command_runner.go:130] >            lameduck 5s
	I0906 15:10:21.826149   28549 command_runner.go:130] >         }
	I0906 15:10:21.826156   28549 command_runner.go:130] >         ready
	I0906 15:10:21.826177   28549 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0906 15:10:21.826189   28549 command_runner.go:130] >            pods insecure
	I0906 15:10:21.826197   28549 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0906 15:10:21.826206   28549 command_runner.go:130] >            ttl 30
	I0906 15:10:21.826212   28549 command_runner.go:130] >         }
	I0906 15:10:21.826220   28549 command_runner.go:130] >         prometheus :9153
	I0906 15:10:21.826229   28549 command_runner.go:130] >         hosts {
	I0906 15:10:21.826187   28549 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:10:21.826236   28549 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0906 15:10:21.826246   28549 command_runner.go:130] >            fallthrough
	I0906 15:10:21.826252   28549 command_runner.go:130] >         }
	I0906 15:10:21.826259   28549 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0906 15:10:21.826267   28549 command_runner.go:130] >            max_concurrent 1000
	I0906 15:10:21.826275   28549 command_runner.go:130] >         }
	I0906 15:10:21.826284   28549 command_runner.go:130] >         cache 30
	I0906 15:10:21.826293   28549 command_runner.go:130] >         loop
	I0906 15:10:21.826322   28549 command_runner.go:130] >         reload
	I0906 15:10:21.826328   28549 command_runner.go:130] >         loadbalance
	I0906 15:10:21.826334   28549 command_runner.go:130] >     }
	I0906 15:10:21.826339   28549 command_runner.go:130] > kind: ConfigMap
	I0906 15:10:21.826343   28549 command_runner.go:130] > metadata:
	I0906 15:10:21.826349   28549 command_runner.go:130] >   creationTimestamp: "2022-09-06T22:06:35Z"
	I0906 15:10:21.826353   28549 command_runner.go:130] >   name: coredns
	I0906 15:10:21.826358   28549 command_runner.go:130] >   namespace: kube-system
	I0906 15:10:21.826363   28549 command_runner.go:130] >   resourceVersion: "371"
	I0906 15:10:21.826370   28549 command_runner.go:130] >   uid: 99586de8-1370-4877-aa2d-6bd1c7354337
	I0906 15:10:21.826430   28549 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
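
The ConfigMap dump above ends in the "already contains ... skipping" short-circuit: minikube rewrites the Corefile only when the host.minikube.internal hosts entry is missing. The gating check reduces to a substring test on the fetched Corefile; a minimal sketch with an illustrative helper name:

    package sketch

    import "strings"

    // hasMinikubeHostRecord reports whether the Corefile already carries
    // the host.minikube.internal record, in which case no edit is needed.
    func hasMinikubeHostRecord(corefile string) bool {
        return strings.Contains(corefile, "host.minikube.internal")
    }
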
	I0906 15:10:21.826561   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:10:21.827298   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:10:21.837326   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:21.897455   28549 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:10:21.923245   28549 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:10:21.923717   28549 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-2022090615060
6-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:10:21.960625   28549 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:10:21.960647   28549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:10:21.960783   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:21.961038   28549 round_trippers.go:463] GET https://127.0.0.1:57200/apis/storage.k8s.io/v1/storageclasses
	I0906 15:10:21.961056   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.961945   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.962094   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.965829   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:21.965858   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.965866   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.965872   28549 round_trippers.go:580]     Content-Length: 1273
	I0906 15:10:21.965877   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.965887   28549 round_trippers.go:580]     Audit-Id: 404bccf5-0825-4fe3-ab9f-6998c764af66
	I0906 15:10:21.965893   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.965900   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.965908   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.966646   28549 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0906 15:10:21.967748   28549 request.go:1073] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0906 15:10:21.967792   28549 round_trippers.go:463] PUT https://127.0.0.1:57200/apis/storage.k8s.io/v1/storageclasses/standard
	I0906 15:10:21.967797   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.967803   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.967809   28549 round_trippers.go:473]     Content-Type: application/json
	I0906 15:10:21.967814   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.971606   28549 node_ready.go:35] waiting up to 6m0s for node "multinode-20220906150606-22187" to be "Ready" ...
	I0906 15:10:21.971676   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:21.971680   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:21.971686   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:21.971692   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:21.973022   28549 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 15:10:21.973053   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.973064   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.973070   28549 round_trippers.go:580]     Content-Length: 1220
	I0906 15:10:21.973074   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.973084   28549 round_trippers.go:580]     Audit-Id: 6db5d999-9ff1-4d21-aac7-bfc89c0eea42
	I0906 15:10:21.973090   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.973096   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.973102   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.973124   28549 request.go:1073] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0906 15:10:21.973207   28549 addons.go:153] Setting addon default-storageclass=true in "multinode-20220906150606-22187"
	W0906 15:10:21.973214   28549 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:10:21.973232   28549 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:10:21.973565   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:10:21.974650   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:21.974695   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:21.974702   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:21.974707   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:21.974714   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:21.974719   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:21 GMT
	I0906 15:10:21.974724   28549 round_trippers.go:580]     Audit-Id: ef165d66-403e-44a6-a74f-1f7c681d97bc
	I0906 15:10:21.974728   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:21.975644   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:21.975938   28549 node_ready.go:49] node "multinode-20220906150606-22187" has status "Ready":"True"
	I0906 15:10:21.975947   28549 node_ready.go:38] duration metric: took 4.323366ms waiting for node "multinode-20220906150606-22187" to be "Ready" ...
	I0906 15:10:21.975956   28549 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:10:22.030027   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:10:22.037969   28549 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:10:22.037982   28549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:10:22.038050   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:22.102350   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:10:22.120019   28549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:10:22.141023   28549 request.go:533] Waited for 165.018627ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:22.141050   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:22.141055   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:22.141062   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:22.141067   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:22.144654   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:22.144666   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:22.144672   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:22.144677   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:22.144690   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:22 GMT
	I0906 15:10:22.144698   28549 round_trippers.go:580]     Audit-Id: a534ebf2-8dbd-490d-b160-c174b4e6a83d
	I0906 15:10:22.144704   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:22.144711   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:22.146486   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85156 chars]
	I0906 15:10:22.148977   28549 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:22.191688   28549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:10:22.314182   28549 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0906 15:10:22.315754   28549 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0906 15:10:22.318148   28549 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0906 15:10:22.320455   28549 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0906 15:10:22.322302   28549 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0906 15:10:22.329110   28549 command_runner.go:130] > pod/storage-provisioner configured
	I0906 15:10:22.341368   28549 request.go:533] Waited for 192.339579ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:22.341395   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:22.341400   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:22.341406   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:22.341412   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:22.343969   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:22.343980   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:22.343985   28549 round_trippers.go:580]     Audit-Id: 8f71dfd8-fa7f-4006-8ad1-c3455d457af4
	I0906 15:10:22.343990   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:22.343996   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:22.344007   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:22.344013   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:22.344017   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:22 GMT
	I0906 15:10:22.344084   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:22.373003   28549 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0906 15:10:22.399785   28549 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 15:10:22.441547   28549 addons.go:414] enableAddons completed in 685.350332ms
	I0906 15:10:22.541331   28549 request.go:533] Waited for 196.900226ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:22.541371   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:22.541378   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:22.541389   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:22.541401   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:22.545111   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:22.545121   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:22.545128   28549 round_trippers.go:580]     Audit-Id: 409f4ed1-e89b-402a-91ea-7f4175686da5
	I0906 15:10:22.545135   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:22.545141   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:22.545146   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:22.545151   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:22.545156   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:22 GMT
	I0906 15:10:22.545209   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:23.047744   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:23.047757   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:23.047766   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:23.047773   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:23.050885   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:23.050896   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:23.050901   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:23.050906   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:23.050911   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:23.050915   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:23.050921   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:23 GMT
	I0906 15:10:23.050925   28549 round_trippers.go:580]     Audit-Id: b3ff731a-ff9c-4c69-847d-1b5a9b396a65
	I0906 15:10:23.051006   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:23.051315   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:23.051320   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:23.051326   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:23.051332   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:23.053875   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:23.053884   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:23.053889   28549 round_trippers.go:580]     Audit-Id: 7ec5765f-3aca-45e1-8c29-7d2dc96c5a7a
	I0906 15:10:23.053897   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:23.053902   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:23.053907   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:23.053912   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:23.053916   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:23 GMT
	I0906 15:10:23.053970   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:23.547705   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:23.547725   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:23.547737   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:23.547747   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:23.551309   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:23.551321   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:23.551326   28549 round_trippers.go:580]     Audit-Id: c64181ec-2202-499b-9929-b74eb04826c6
	I0906 15:10:23.551331   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:23.551335   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:23.551340   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:23.551345   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:23.551350   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:23 GMT
	I0906 15:10:23.551418   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:23.551715   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:23.551721   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:23.551728   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:23.551732   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:23.553846   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:23.553863   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:23.553874   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:23.553880   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:23.553886   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:23.553896   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:23 GMT
	I0906 15:10:23.553901   28549 round_trippers.go:580]     Audit-Id: 18847763-bf51-42ab-8e69-5c6ef01ab3d2
	I0906 15:10:23.553907   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:23.554166   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:24.047595   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:24.047620   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:24.047632   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:24.047644   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:24.051205   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:24.051217   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:24.051229   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:24 GMT
	I0906 15:10:24.051235   28549 round_trippers.go:580]     Audit-Id: ceda892f-cbec-465e-aa16-b7e6f1fe9680
	I0906 15:10:24.051239   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:24.051244   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:24.051250   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:24.051255   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:24.051325   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:24.051619   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:24.051624   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:24.051630   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:24.051635   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:24.053360   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:24.053369   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:24.053374   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:24.053379   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:24.053385   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:24.053391   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:24.053398   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:24 GMT
	I0906 15:10:24.053407   28549 round_trippers.go:580]     Audit-Id: 39df1b93-1028-45d1-9341-18c04de35913
	I0906 15:10:24.053451   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:24.547675   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:24.547699   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:24.547710   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:24.547719   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:24.551142   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:24.551154   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:24.551160   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:24 GMT
	I0906 15:10:24.551164   28549 round_trippers.go:580]     Audit-Id: ce7cc79b-f327-4b32-a96f-42e36a612f80
	I0906 15:10:24.551170   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:24.551176   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:24.551185   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:24.551191   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:24.551260   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:24.551545   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:24.551551   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:24.551557   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:24.551565   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:24.553257   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:24.553267   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:24.553272   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:24.553277   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:24.553281   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:24.553287   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:24.553291   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:24 GMT
	I0906 15:10:24.553296   28549 round_trippers.go:580]     Audit-Id: f70e2d17-81c4-47d5-abd5-e12168f90656
	I0906 15:10:24.553342   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:24.553525   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:25.045767   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:25.045788   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:25.045801   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:25.045812   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:25.049305   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:25.049317   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:25.049323   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:25.049328   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:25 GMT
	I0906 15:10:25.049333   28549 round_trippers.go:580]     Audit-Id: f02d307a-4829-43e6-86fb-ebff9064d8ce
	I0906 15:10:25.049338   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:25.049343   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:25.049347   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:25.049492   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:25.049797   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:25.049803   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:25.049809   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:25.049814   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:25.051825   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:25.051835   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:25.051840   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:25.051845   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:25 GMT
	I0906 15:10:25.051850   28549 round_trippers.go:580]     Audit-Id: 817a5bb4-470a-43b0-a07c-9c386d714dad
	I0906 15:10:25.051856   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:25.051862   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:25.051866   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:25.051981   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:25.547508   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:25.547520   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:25.547526   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:25.547531   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:25.550038   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:25.550059   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:25.550066   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:25 GMT
	I0906 15:10:25.550071   28549 round_trippers.go:580]     Audit-Id: d9c75459-d02a-4a31-bd3e-ca2d2df40f69
	I0906 15:10:25.550076   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:25.550081   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:25.550086   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:25.550090   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:25.550151   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:25.550434   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:25.550441   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:25.550447   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:25.550463   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:25.552756   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:25.552768   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:25.552774   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:25.552779   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:25.552784   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:25.552789   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:25.552793   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:25 GMT
	I0906 15:10:25.552798   28549 round_trippers.go:580]     Audit-Id: f06b59d7-7628-430f-b828-f162baf7f454
	I0906 15:10:25.552918   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:26.045817   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:26.045836   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:26.045844   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:26.045851   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:26.049198   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:26.049212   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:26.049217   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:26.049234   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:26 GMT
	I0906 15:10:26.049242   28549 round_trippers.go:580]     Audit-Id: d640682b-c1fb-4b44-a564-1af47273b749
	I0906 15:10:26.049247   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:26.049257   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:26.049262   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:26.049322   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:26.049612   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:26.049618   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:26.049624   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:26.049629   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:26.051405   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:26.051415   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:26.051421   28549 round_trippers.go:580]     Audit-Id: 1b9b50cb-af42-403e-8075-da1b906a9a82
	I0906 15:10:26.051425   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:26.051429   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:26.051434   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:26.051440   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:26.051444   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:26 GMT
	I0906 15:10:26.051670   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:26.547675   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:26.547707   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:26.547721   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:26.547732   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:26.553980   28549 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 15:10:26.553993   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:26.553999   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:26.554004   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:26.554008   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:26.554013   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:26.554018   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:26 GMT
	I0906 15:10:26.554022   28549 round_trippers.go:580]     Audit-Id: 0a650b8f-fadb-400d-9444-948e9d96fb33
	I0906 15:10:26.554090   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:26.554393   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:26.554399   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:26.554410   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:26.554416   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:26.556213   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:26.556222   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:26.556228   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:26 GMT
	I0906 15:10:26.556236   28549 round_trippers.go:580]     Audit-Id: 8a52147c-3b7d-42b6-8249-7ca52167e7d2
	I0906 15:10:26.556241   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:26.556245   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:26.556250   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:26.556254   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:26.556304   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:26.556484   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:27.047629   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:27.047653   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:27.047665   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:27.047675   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:27.051099   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:27.051112   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:27.051120   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:27.051125   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:27.051139   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:27 GMT
	I0906 15:10:27.051150   28549 round_trippers.go:580]     Audit-Id: 1fdc61ac-9ce4-44b2-aee4-ebff17d0b5ea
	I0906 15:10:27.051157   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:27.051164   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:27.051364   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:27.051655   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:27.051661   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:27.051668   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:27.051674   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:27.053609   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:27.053618   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:27.053624   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:27.053630   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:27.053637   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:27.053644   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:27.053649   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:27 GMT
	I0906 15:10:27.053654   28549 round_trippers.go:580]     Audit-Id: 9883a2fb-a208-48c4-9be0-9feb9e4757d1
	I0906 15:10:27.053729   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:27.546606   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:27.546631   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:27.546642   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:27.546652   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:27.550034   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:27.550046   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:27.550052   28549 round_trippers.go:580]     Audit-Id: 86834fcd-92cd-4477-995e-e0275f298ff0
	I0906 15:10:27.550057   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:27.550061   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:27.550066   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:27.550083   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:27.550093   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:27 GMT
	I0906 15:10:27.550176   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:27.550471   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:27.550477   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:27.550483   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:27.550488   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:27.552295   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:27.552303   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:27.552308   28549 round_trippers.go:580]     Audit-Id: 2a5da242-f201-43e7-941f-80560f4a8531
	I0906 15:10:27.552313   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:27.552318   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:27.552322   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:27.552327   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:27.552332   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:27 GMT
	I0906 15:10:27.552375   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:28.047524   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:28.047537   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:28.047543   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:28.047548   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:28.050113   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:28.050123   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:28.050128   28549 round_trippers.go:580]     Audit-Id: a5548866-3ce1-4641-a000-02cafe90c523
	I0906 15:10:28.050133   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:28.050140   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:28.050145   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:28.050160   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:28.050167   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:28 GMT
	I0906 15:10:28.050238   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:28.050537   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:28.050543   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:28.050549   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:28.050555   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:28.052670   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:28.052682   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:28.052688   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:28.052695   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:28.052700   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:28.052705   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:28.052710   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:28 GMT
	I0906 15:10:28.052714   28549 round_trippers.go:580]     Audit-Id: 5e36ad67-1ba1-4349-a16a-2aa45c39189c
	I0906 15:10:28.052765   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:28.547571   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:28.547592   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:28.547605   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:28.547614   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:28.550674   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:28.550687   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:28.550692   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:28.550697   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:28.550701   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:28 GMT
	I0906 15:10:28.550705   28549 round_trippers.go:580]     Audit-Id: 76cc177d-8ce1-4351-bbc5-c5f029c98947
	I0906 15:10:28.550709   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:28.550713   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:28.550777   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:28.551070   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:28.551076   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:28.551082   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:28.551087   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:28.553448   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:28.553457   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:28.553462   28549 round_trippers.go:580]     Audit-Id: 9f22805b-8c49-43ab-86ef-f8906f39fe75
	I0906 15:10:28.553467   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:28.553472   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:28.553477   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:28.553482   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:28.553486   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:28 GMT
	I0906 15:10:28.553539   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:29.045724   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:29.045741   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:29.045750   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:29.045757   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:29.048810   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:29.048822   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:29.048827   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:29.048833   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:29.048845   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:29 GMT
	I0906 15:10:29.048861   28549 round_trippers.go:580]     Audit-Id: 671cc099-c836-4753-970f-44af3300d499
	I0906 15:10:29.048871   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:29.048882   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:29.048954   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:29.049236   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:29.049242   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:29.049247   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:29.049254   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:29.050902   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:29.050911   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:29.050918   28549 round_trippers.go:580]     Audit-Id: 534ff1c7-b275-49ba-ac46-a7f89a06c446
	I0906 15:10:29.050925   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:29.050930   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:29.050935   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:29.050939   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:29.050944   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:29 GMT
	I0906 15:10:29.051105   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:29.051290   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
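
The block above is one full iteration of minikube's readiness wait: roughly every 500 ms it GETs the CoreDNS pod and its node, then re-evaluates the pod's Ready condition (still "False" here). The following is a minimal sketch of that style of poll using client-go; the helper name waitPodReady is hypothetical and this is not minikube's actual pod_ready.go implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the named pod every 500ms until its Ready
    // condition is True, mirroring the GET cadence in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollImmediateUntilWithContext(ctx, 500*time.Millisecond,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	// Load the default kubeconfig; the test points this at the profile's cluster.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    	defer cancel()
    	if err := waitPodReady(ctx, cs, "kube-system", "coredns-565d847f94-t6l66"); err != nil {
    		fmt.Println("pod never became Ready:", err)
    	}
    }
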
	I0906 15:10:29.546458   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:29.546477   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:29.546489   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:29.546499   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:29.550275   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:29.550292   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:29.550300   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:29.550307   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:29 GMT
	I0906 15:10:29.550312   28549 round_trippers.go:580]     Audit-Id: f205fcec-7d4f-4d4c-b4b8-4de433a6f237
	I0906 15:10:29.550319   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:29.550325   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:29.550371   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:29.550553   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:29.550840   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:29.550846   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:29.550852   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:29.550857   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:29.552963   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:29.552972   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:29.552978   28549 round_trippers.go:580]     Audit-Id: 7dbf2f45-d4ad-4cda-9601-593021a5e75b
	I0906 15:10:29.552983   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:29.552988   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:29.552992   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:29.552999   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:29.553004   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:29 GMT
	I0906 15:10:29.553051   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:30.047306   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:30.047325   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:30.047334   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:30.047341   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:30.050441   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:30.050453   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:30.050458   28549 round_trippers.go:580]     Audit-Id: 3875b2e7-3248-49ed-8e9c-c3b38ad3dcb6
	I0906 15:10:30.050463   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:30.050467   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:30.050471   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:30.050476   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:30.050481   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:30 GMT
	I0906 15:10:30.050543   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:30.050834   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:30.050839   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:30.050845   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:30.050850   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:30.052749   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:30.052757   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:30.052762   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:30.052766   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:30.052772   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:30.052776   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:30 GMT
	I0906 15:10:30.052780   28549 round_trippers.go:580]     Audit-Id: 770c938f-79c9-4705-a671-36b7d435a6d8
	I0906 15:10:30.052785   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:30.052828   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:30.547549   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:30.547560   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:30.547567   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:30.547573   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:30.550151   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:30.550161   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:30.550168   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:30.550174   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:30.550180   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:30 GMT
	I0906 15:10:30.550185   28549 round_trippers.go:580]     Audit-Id: 664f2b14-e009-4571-820a-086420394757
	I0906 15:10:30.550190   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:30.550195   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:30.550251   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:30.550527   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:30.550534   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:30.550540   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:30.550546   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:30.552422   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:30.552431   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:30.552436   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:30.552441   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:30.552446   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:30 GMT
	I0906 15:10:30.552450   28549 round_trippers.go:580]     Audit-Id: 5ad32fb8-2a5b-42a9-91bc-5d2faa5944c6
	I0906 15:10:30.552456   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:30.552460   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:30.552524   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:31.046220   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:31.046251   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:31.046263   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:31.046272   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:31.048835   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:31.048845   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:31.048852   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:31 GMT
	I0906 15:10:31.048857   28549 round_trippers.go:580]     Audit-Id: 467c7b64-393c-49ab-b2c8-6306470b8bb5
	I0906 15:10:31.048863   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:31.048867   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:31.048874   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:31.048879   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:31.048941   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:31.049234   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:31.049240   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:31.049246   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:31.049251   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:31.051143   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:31.051153   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:31.051159   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:31.051164   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:31.051168   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:31.051173   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:31.051177   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:31 GMT
	I0906 15:10:31.051182   28549 round_trippers.go:580]     Audit-Id: 79554f34-2af1-4936-9e6b-20db7ee159e1
	I0906 15:10:31.051604   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:31.051790   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
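
The round_trippers.go lines throughout this section are client-go's verbose HTTP tracing: request line, request headers, response status with latency, response headers, and a truncated body. A comparable debugging wrapper can be written against plain net/http; the sketch below is illustrative only (it is not client-go's implementation, and the target URL is a stand-in for the apiserver endpoint seen above).

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // debugRT wraps another RoundTripper and prints roughly the same
    // trace as the round_trippers.go lines in this log.
    type debugRT struct{ next http.RoundTripper }

    func (d debugRT) RoundTrip(req *http.Request) (*http.Response, error) {
    	fmt.Printf("%s %s\n", req.Method, req.URL)
    	fmt.Println("Request Headers:")
    	for k, vals := range req.Header {
    		for _, v := range vals {
    			fmt.Printf("    %s: %s\n", k, v)
    		}
    	}
    	start := time.Now()
    	resp, err := d.next.RoundTrip(req)
    	if err != nil {
    		return nil, err
    	}
    	fmt.Printf("Response Status: %s in %d milliseconds\n", resp.Status, time.Since(start).Milliseconds())
    	fmt.Println("Response Headers:")
    	for k, vals := range resp.Header {
    		for _, v := range vals {
    			fmt.Printf("    %s: %s\n", k, v)
    		}
    	}
    	return resp, nil
    }

    func main() {
    	client := &http.Client{Transport: debugRT{next: http.DefaultTransport}}
    	// Stand-in target; the log above talks to the apiserver on 127.0.0.1:57200.
    	resp, err := client.Get("https://example.com/")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	io.Copy(io.Discard, resp.Body)
    }
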
	I0906 15:10:31.547574   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:31.547593   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:31.547601   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:31.547608   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:31.550721   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:31.550735   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:31.550740   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:31.550745   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:31.550753   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:31.550758   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:31 GMT
	I0906 15:10:31.550765   28549 round_trippers.go:580]     Audit-Id: 75ee567f-f7cf-412d-9f0b-3e5b78432f4f
	I0906 15:10:31.550770   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:31.550829   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:31.551119   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:31.551125   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:31.551131   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:31.551137   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:31.552911   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:31.552920   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:31.552925   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:31.552930   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:31.552935   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:31.552940   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:31.552945   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:31 GMT
	I0906 15:10:31.552949   28549 round_trippers.go:580]     Audit-Id: 4db668bf-eb8d-4c7a-985e-f281273e273f
	I0906 15:10:31.552993   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:32.046555   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:32.046570   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:32.046578   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:32.046585   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:32.049435   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:32.049445   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:32.049451   28549 round_trippers.go:580]     Audit-Id: a325082a-9555-4433-8d69-4b4e47d01200
	I0906 15:10:32.049456   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:32.049461   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:32.049465   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:32.049472   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:32.049477   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:32 GMT
	I0906 15:10:32.049758   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:32.050051   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:32.050057   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:32.050063   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:32.050068   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:32.051928   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:32.051936   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:32.051943   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:32.051948   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:32.051953   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:32 GMT
	I0906 15:10:32.051958   28549 round_trippers.go:580]     Audit-Id: ea01f728-d55d-4c3d-ae85-812fc7eda3c8
	I0906 15:10:32.051966   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:32.051993   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:32.052374   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:32.546640   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:32.546655   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:32.546664   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:32.546672   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:32.549907   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:32.549922   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:32.549929   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:32.549940   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:32.549947   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:32 GMT
	I0906 15:10:32.549954   28549 round_trippers.go:580]     Audit-Id: 53e5e02f-84f7-4150-9e5b-3df0b9d7800d
	I0906 15:10:32.549958   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:32.549963   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:32.550059   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:32.550415   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:32.550421   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:32.550426   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:32.550432   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:32.552338   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:32.552347   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:32.552352   28549 round_trippers.go:580]     Audit-Id: 50334f53-ef2a-40fc-805c-07f3edf00919
	I0906 15:10:32.552357   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:32.552362   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:32.552366   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:32.552371   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:32.552376   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:32 GMT
	I0906 15:10:32.552426   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:33.045575   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:33.045595   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:33.045619   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:33.045649   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:33.049069   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:33.049087   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:33.049093   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:33 GMT
	I0906 15:10:33.049097   28549 round_trippers.go:580]     Audit-Id: ea530799-0317-4ceb-b493-50ecb637a3db
	I0906 15:10:33.049102   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:33.049106   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:33.049111   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:33.049116   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:33.049174   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:33.049468   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:33.049474   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:33.049479   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:33.049484   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:33.051171   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:33.051185   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:33.051191   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:33.051197   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:33.051203   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:33 GMT
	I0906 15:10:33.051210   28549 round_trippers.go:580]     Audit-Id: f954eac1-cdb1-4d9d-8da1-3c9775bc8a8b
	I0906 15:10:33.051220   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:33.051227   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:33.051416   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:33.545976   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:33.545992   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:33.546002   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:33.546009   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:33.549335   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:33.549345   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:33.549350   28549 round_trippers.go:580]     Audit-Id: 9101f0e7-d06a-4240-a771-85d25578713b
	I0906 15:10:33.549355   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:33.549361   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:33.549367   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:33.549374   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:33.549381   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:33 GMT
	I0906 15:10:33.549542   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:33.549843   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:33.549849   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:33.549857   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:33.549862   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:33.551791   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:33.551801   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:33.551806   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:33.551810   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:33.551815   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:33 GMT
	I0906 15:10:33.551819   28549 round_trippers.go:580]     Audit-Id: 9f76bf75-c884-4c90-9aa5-85ebacfb4245
	I0906 15:10:33.551824   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:33.551828   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:33.551891   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:33.552082   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
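	[editor's note] The `pod_ready.go:102` verdict above is derived from the Pod object returned by the preceding GET: the checker inspects `status.conditions` for the `Ready` condition. A minimal sketch of that check (not minikube's actual pod_ready code; the Pod literal below is a hypothetical stand-in mirroring the coredns pod's state in this log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReadyStatus returns the string form of the Pod's Ready condition,
// e.g. "True" or "False", or "Unknown" when the condition is absent.
func podReadyStatus(pod *corev1.Pod) string {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return string(cond.Status)
		}
	}
	return string(corev1.ConditionUnknown)
}

func main() {
	// Hypothetical Pod whose Ready condition is False, like
	// coredns-565d847f94-t6l66 in the responses above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Printf("pod has status \"Ready\":%q\n", podReadyStatus(pod))
}
```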
	I0906 15:10:34.045997   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:34.046012   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:34.046023   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:34.046031   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:34.049256   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:34.049269   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:34.049274   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:34.049280   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:34 GMT
	I0906 15:10:34.049285   28549 round_trippers.go:580]     Audit-Id: d53b5e7e-c6f7-43ab-88cf-d3bf7706ce8a
	I0906 15:10:34.049293   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:34.049299   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:34.049303   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:34.049368   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:34.049667   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:34.049672   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:34.049679   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:34.049684   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:34.051448   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:34.051457   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:34.051465   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:34.051472   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:34 GMT
	I0906 15:10:34.051478   28549 round_trippers.go:580]     Audit-Id: e267c844-e59c-40e9-bebe-a92c1e074ba4
	I0906 15:10:34.051486   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:34.051492   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:34.051499   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:34.051696   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:34.546505   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:34.546521   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:34.546532   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:34.546539   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:34.550033   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:34.550045   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:34.550050   28549 round_trippers.go:580]     Audit-Id: 4f2787ac-b385-4170-8379-25aecd67d2e0
	I0906 15:10:34.550059   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:34.550064   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:34.550069   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:34.550073   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:34.550078   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:34 GMT
	I0906 15:10:34.550161   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:34.550463   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:34.550468   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:34.550474   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:34.550480   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:34.552575   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:34.552584   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:34.552591   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:34.552596   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:34 GMT
	I0906 15:10:34.552601   28549 round_trippers.go:580]     Audit-Id: 92b5e469-c0b3-4dc0-9c56-09aaa89cc003
	I0906 15:10:34.552605   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:34.552610   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:34.552615   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:34.552666   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:35.047684   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:35.047703   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:35.047715   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:35.047724   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:35.051578   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:35.051595   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:35.051603   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:35.051610   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:35.051620   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:35.051635   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:35 GMT
	I0906 15:10:35.051643   28549 round_trippers.go:580]     Audit-Id: 22bc624f-0864-4015-9f75-f1133bf150ac
	I0906 15:10:35.051653   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:35.051859   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:35.052234   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:35.052240   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:35.052246   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:35.052253   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:35.054096   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:35.054105   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:35.054110   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:35 GMT
	I0906 15:10:35.054115   28549 round_trippers.go:580]     Audit-Id: bb03e78d-1410-42c6-b1ac-54386baca4be
	I0906 15:10:35.054119   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:35.054124   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:35.054132   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:35.054137   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:35.054185   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:35.547171   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:35.547191   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:35.547212   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:35.547222   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:35.549574   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:35.549584   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:35.549595   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:35.549600   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:35.549604   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:35 GMT
	I0906 15:10:35.549610   28549 round_trippers.go:580]     Audit-Id: 678334ab-8f6c-47e7-a5e8-dc18f1f08b05
	I0906 15:10:35.549616   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:35.549624   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:35.549965   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:35.550262   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:35.550269   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:35.550277   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:35.550283   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:35.552467   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:35.552478   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:35.552486   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:35.552492   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:35.552500   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:35.552506   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:35 GMT
	I0906 15:10:35.552511   28549 round_trippers.go:580]     Audit-Id: 3d2124de-f001-4ec1-9142-5a7d3352e969
	I0906 15:10:35.552515   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:35.552840   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:35.553062   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:36.045660   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:36.045673   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:36.045688   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:36.045699   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:36.048142   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:36.048151   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:36.048162   28549 round_trippers.go:580]     Audit-Id: 6e3209f9-0db0-4795-896a-f38de8787387
	I0906 15:10:36.048169   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:36.048179   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:36.048187   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:36.048191   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:36.048197   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:36 GMT
	I0906 15:10:36.048494   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:36.048785   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:36.048791   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:36.048796   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:36.048802   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:36.050689   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:36.050698   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:36.050703   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:36 GMT
	I0906 15:10:36.050708   28549 round_trippers.go:580]     Audit-Id: 6a2b0de6-9fe6-4aa1-856f-d38a6bcf3e5c
	I0906 15:10:36.050712   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:36.050717   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:36.050721   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:36.050726   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:36.051033   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:36.546262   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:36.546280   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:36.546292   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:36.546301   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:36.549350   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:36.549361   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:36.549366   28549 round_trippers.go:580]     Audit-Id: 9fa66cb6-3b5e-4e2d-a1f9-cd18131ba438
	I0906 15:10:36.549371   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:36.549376   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:36.549380   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:36.549385   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:36.549396   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:36 GMT
	I0906 15:10:36.549598   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:36.549914   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:36.549921   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:36.549926   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:36.549932   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:36.552003   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:36.552011   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:36.552016   28549 round_trippers.go:580]     Audit-Id: ee04c94c-32d4-4998-807f-db8aa6e3b72d
	I0906 15:10:36.552023   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:36.552027   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:36.552032   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:36.552036   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:36.552042   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:36 GMT
	I0906 15:10:36.552087   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:37.047532   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:37.047551   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:37.047562   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:37.047572   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:37.051775   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:37.051785   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:37.051791   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:37.051795   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:37.051802   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:37.051815   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:37 GMT
	I0906 15:10:37.051827   28549 round_trippers.go:580]     Audit-Id: 22681f2f-53ec-4702-b6b0-56ea34de7585
	I0906 15:10:37.051836   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:37.051946   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:37.052233   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:37.052239   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:37.052245   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:37.052250   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:37.054240   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:37.054249   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:37.054254   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:37.054260   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:37 GMT
	I0906 15:10:37.054265   28549 round_trippers.go:580]     Audit-Id: 3c7dd69f-b528-41ce-b06c-66786b2aafc2
	I0906 15:10:37.054270   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:37.054275   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:37.054279   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:37.054333   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:37.547058   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:37.547086   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:37.547098   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:37.547107   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:37.550208   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:37.550220   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:37.550225   28549 round_trippers.go:580]     Audit-Id: e2c89126-c028-4e0c-bc00-e7539f1a26d1
	I0906 15:10:37.550230   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:37.550239   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:37.550245   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:37.550250   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:37.550254   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:37 GMT
	I0906 15:10:37.550313   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:37.550603   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:37.550608   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:37.550614   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:37.550619   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:37.552512   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:37.552521   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:37.552526   28549 round_trippers.go:580]     Audit-Id: 4ee0e1a6-dbc5-4b9d-bea3-00c1f60dc9cf
	I0906 15:10:37.552534   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:37.552540   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:37.552545   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:37.552553   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:37.552559   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:37 GMT
	I0906 15:10:37.552752   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:38.047527   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:38.047543   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:38.047552   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:38.047559   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:38.050269   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:38.050282   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:38.050290   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:38.050295   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:38.050300   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:38 GMT
	I0906 15:10:38.050306   28549 round_trippers.go:580]     Audit-Id: 576a0985-641c-457a-8644-259361efd747
	I0906 15:10:38.050312   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:38.050316   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:38.050443   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:38.050729   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:38.050736   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:38.050742   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:38.050747   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:38.052979   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:38.052987   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:38.052994   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:38.052998   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:38.053003   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:38 GMT
	I0906 15:10:38.053008   28549 round_trippers.go:580]     Audit-Id: cad48e58-3124-45d0-8ad3-133aa9249993
	I0906 15:10:38.053012   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:38.053017   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:38.053382   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:38.053556   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
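	[editor's note] The GET-pod/GET-node pair above repeats on a roughly 500 ms cadence (compare the timestamps :38.047, :38.545, :39.047) until the pod reports Ready or the wait budget expires. A minimal client-go sketch of such a wait loop, assuming a kubeconfig at the default location pointing at the cluster under test; the pod name and namespace are taken from this log, while the 6-minute budget is an illustrative assumption, not minikube's configured timeout, and this is not minikube's pod_ready implementation:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the Pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed: kubeconfig at the default path (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Illustrative wait budget; minikube's actual timeout may differ.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-565d847f94-t6l66", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		// Matches the ~500 ms polling cadence visible in the log timestamps.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```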
	I0906 15:10:38.545684   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:38.545709   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:38.545721   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:38.545730   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:38.549585   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:38.549598   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:38.549614   28549 round_trippers.go:580]     Audit-Id: b68bb6d5-0fdd-4822-a879-2fcd9e707b81
	I0906 15:10:38.549632   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:38.549639   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:38.549646   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:38.549674   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:38.549688   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:38 GMT
	I0906 15:10:38.549952   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:38.550249   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:38.550260   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:38.550267   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:38.550274   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:38.552337   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:38.552345   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:38.552350   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:38.552354   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:38.552360   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:38.552364   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:38 GMT
	I0906 15:10:38.552368   28549 round_trippers.go:580]     Audit-Id: c9212d3a-c7e7-4e72-bcf0-002c16d22a98
	I0906 15:10:38.552379   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:38.552423   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:39.047659   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:39.047679   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:39.047691   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:39.047700   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:39.050926   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:39.050940   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:39.050945   28549 round_trippers.go:580]     Audit-Id: c7cb2505-08dd-4a84-b27b-d67fbba54924
	I0906 15:10:39.050950   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:39.050954   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:39.050958   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:39.050962   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:39.050967   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:39 GMT
	I0906 15:10:39.051024   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:39.051311   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:39.051317   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:39.051323   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:39.051327   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:39.053035   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:39.053045   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:39.053060   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:39 GMT
	I0906 15:10:39.053071   28549 round_trippers.go:580]     Audit-Id: f2ebf6d9-45b4-43ed-9da1-18fd535e79c5
	I0906 15:10:39.053077   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:39.053088   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:39.053095   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:39.053104   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:39.053375   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:39.545651   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:39.545700   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:39.545709   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:39.545716   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:39.548427   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:39.548438   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:39.548444   28549 round_trippers.go:580]     Audit-Id: 50e1d6ce-978a-4669-8228-80396ff7e22f
	I0906 15:10:39.548453   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:39.548460   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:39.548468   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:39.548476   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:39.548483   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:39 GMT
	I0906 15:10:39.548715   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:39.549007   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:39.549016   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:39.549022   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:39.549027   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:39.550835   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:39.550845   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:39.550851   28549 round_trippers.go:580]     Audit-Id: 481beb84-7cc0-46c0-9065-f2492622bb85
	I0906 15:10:39.550861   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:39.550867   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:39.550874   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:39.550880   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:39.550885   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:39 GMT
	I0906 15:10:39.551309   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.047259   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:40.047284   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.047296   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.047307   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.051101   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:40.051116   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.051131   28549 round_trippers.go:580]     Audit-Id: bade95fa-d7f2-4ac1-9a06-f68fc9daead1
	I0906 15:10:40.051140   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.051147   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.051154   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.051160   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.051166   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.051730   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"743","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6793 chars]
	I0906 15:10:40.052024   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.052030   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.052036   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.052041   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.053899   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.053910   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.053916   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.053921   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.053925   28549 round_trippers.go:580]     Audit-Id: e6de9ba4-7a91-4f74-976f-6eb4c96856ee
	I0906 15:10:40.053930   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.053937   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.053944   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.054186   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.054375   28549 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:10:40.547874   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:10:40.547897   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.547911   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.547922   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.552020   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:40.552032   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.552037   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.552042   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.552046   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.552050   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.552055   28549 round_trippers.go:580]     Audit-Id: faa665bc-4607-45c0-b283-c0d0a3f40061
	I0906 15:10:40.552060   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.552117   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6564 chars]
	I0906 15:10:40.552408   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.552414   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.552421   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.552427   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.554339   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.554348   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.554353   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.554358   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.554363   28549 round_trippers.go:580]     Audit-Id: 10600d2f-bbcb-485e-9490-30df3093b6fb
	I0906 15:10:40.554367   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.554372   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.554376   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.554475   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.554658   28549 pod_ready.go:92] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.554667   28549 pod_ready.go:81] duration metric: took 18.405611621s waiting for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.554673   28549 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.554698   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:10:40.554702   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.554708   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.554714   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.556540   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.556549   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.556555   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.556560   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.556565   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.556569   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.556574   28549 round_trippers.go:580]     Audit-Id: 51e73d65-bf86-4fbd-8df5-6011368361db
	I0906 15:10:40.556578   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.556663   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"765","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash" [truncated 6113 chars]
	I0906 15:10:40.556875   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.556880   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.556888   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.556894   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.558614   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.558622   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.558627   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.558632   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.558637   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.558641   28549 round_trippers.go:580]     Audit-Id: bec0a7c4-a810-4897-aa54-080e7a79cd84
	I0906 15:10:40.558646   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.558650   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.558701   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.558880   28549 pod_ready.go:92] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.558885   28549 pod_ready.go:81] duration metric: took 4.207291ms waiting for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.558895   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.558923   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:10:40.558927   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.558933   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.558939   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.560775   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.560784   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.560790   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.560795   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.560800   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.560804   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.560810   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.560814   28549 round_trippers.go:580]     Audit-Id: 899bf322-5d44-4e62-b581-cae28da40437
	I0906 15:10:40.561103   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"793","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8471 chars]
	I0906 15:10:40.561368   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.561374   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.561381   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.561388   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.563252   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.563259   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.563264   28549 round_trippers.go:580]     Audit-Id: 00c110af-9bcc-43ce-9a93-1f6997127e1b
	I0906 15:10:40.563269   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.563274   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.563281   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.563287   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.563291   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.563337   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.563523   28549 pod_ready.go:92] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.563529   28549 pod_ready.go:81] duration metric: took 4.629465ms waiting for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.563535   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.563559   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:10:40.563563   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.563569   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.563574   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.565277   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.565286   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.565291   28549 round_trippers.go:580]     Audit-Id: 6f5a9db0-e549-4203-8cbd-cb94fcae6727
	I0906 15:10:40.565297   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.565301   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.565306   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.565310   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.565315   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.565371   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"768","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 8044 chars]
	I0906 15:10:40.565617   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.565622   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.565628   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.565633   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.567245   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.567257   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.567262   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.567268   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.567272   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.567277   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.567282   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.567286   28549 round_trippers.go:580]     Audit-Id: 17d3fb77-1ff2-4b0a-a6a7-b40f88e027a4
	I0906 15:10:40.567355   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.567535   28549 pod_ready.go:92] pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.567543   28549 pod_ready.go:81] duration metric: took 4.002983ms waiting for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.567551   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.567576   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-czbjx
	I0906 15:10:40.567580   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.567585   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.567591   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.569377   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.569385   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.569390   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.569397   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.569401   28549 round_trippers.go:580]     Audit-Id: 056f7804-ce91-4dd8-a5ca-ac09f2de9214
	I0906 15:10:40.569405   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.569410   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.569415   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.569457   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-czbjx","generateName":"kube-proxy-","namespace":"kube-system","uid":"c88daf0a-05d7-45b7-b888-8e0749e4d321","resourceVersion":"672","creationTimestamp":"2022-09-06T22:08:13Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:08:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5772 chars]
	I0906 15:10:40.569692   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m03
	I0906 15:10:40.569698   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.569704   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.569709   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.571346   28549 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:10:40.571806   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.571821   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.571826   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.571833   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.571840   28549 round_trippers.go:580]     Audit-Id: 494f365f-854e-46f0-a8c1-9e5e2539cb8b
	I0906 15:10:40.571847   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.571852   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.571982   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m03","uid":"268cefad-05d1-4e4b-b44e-2d8678e78e39","resourceVersion":"685","creationTimestamp":"2022-09-06T22:09:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:09:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostnam
e":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Upd [truncated 4408 chars]
	I0906 15:10:40.572363   28549 pod_ready.go:92] pod "kube-proxy-czbjx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.572372   28549 pod_ready.go:81] duration metric: took 4.815433ms waiting for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.572386   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.748168   28549 request.go:533] Waited for 175.735325ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:10:40.748217   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:10:40.748225   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.748269   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.748285   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.752135   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:40.752151   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.752158   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.752165   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.752171   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.752177   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.752183   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.752190   28549 round_trippers.go:580]     Audit-Id: ed592aff-d284-4033-90e6-f21d3a7c3d5a
	I0906 15:10:40.752261   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kkmpm","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b228e9a-6577-46a3-b848-9c9fca602ba6","resourceVersion":"749","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5762 chars]
	I0906 15:10:40.949108   28549 request.go:533] Waited for 196.461342ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.949199   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:40.949206   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:40.949222   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:40.949230   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:40.952211   28549 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:10:40.952223   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:40.952229   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:40.952238   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:40.952243   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:40 GMT
	I0906 15:10:40.952248   28549 round_trippers.go:580]     Audit-Id: b341cdee-9db9-4707-8c63-e9c124efc28f
	I0906 15:10:40.952254   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:40.952259   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:40.952439   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:40.952640   28549 pod_ready.go:92] pod "kube-proxy-kkmpm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:40.952647   28549 pod_ready.go:81] duration metric: took 380.254549ms waiting for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:40.952653   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:41.148076   28549 request.go:533] Waited for 195.384766ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:10:41.148153   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:10:41.148164   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.148175   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.148186   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.151456   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:41.151466   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.151471   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.151476   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.151481   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.151486   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.151491   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.151495   28549 round_trippers.go:580]     Audit-Id: ca28b027-c218-4c1d-81b5-0d3f8e13d505
	I0906 15:10:41.151545   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wnrrx","generateName":"kube-proxy-","namespace":"kube-system","uid":"260cbcc2-7110-48ce-aa3d-482b3694ae6d","resourceVersion":"476","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5565 chars]
	I0906 15:10:41.348714   28549 request.go:533] Waited for 196.910755ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:10:41.348783   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:10:41.348795   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.348806   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.348818   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.352555   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:41.352573   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.352580   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.352587   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.352594   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.352600   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.352606   28549 round_trippers.go:580]     Audit-Id: 1097b0c1-4d2a-494b-9ede-60cc95bcb0f8
	I0906 15:10:41.352611   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.352681   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m02","uid":"4f069859-75f2-4e6f-a5c1-5cceb9510b05","resourceVersion":"602","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4591 chars]
	I0906 15:10:41.352956   28549 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:41.352986   28549 pod_ready.go:81] duration metric: took 400.326262ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:41.352993   28549 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:41.549910   28549 request.go:533] Waited for 196.88201ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:10:41.549972   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:10:41.550005   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.550019   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.550032   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.553706   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:41.553722   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.553730   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.553738   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.553746   28549 round_trippers.go:580]     Audit-Id: dc707c21-d82b-4758-b0ae-8f0ce57bdcb2
	I0906 15:10:41.553752   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.553759   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.553766   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.553858   28549 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"780","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4928 chars]
	I0906 15:10:41.749983   28549 request.go:533] Waited for 195.790388ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:41.750090   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:10:41.750098   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.750114   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.750132   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.753925   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:41.753940   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.753947   28549 round_trippers.go:580]     Audit-Id: 2ca9c8db-f653-45d7-a86c-f23683ebdd7e
	I0906 15:10:41.753953   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.753960   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.753966   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.753972   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.753978   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.754225   28549 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09 [truncated 5376 chars]
	I0906 15:10:41.754489   28549 pod_ready.go:92] pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:10:41.754500   28549 pod_ready.go:81] duration metric: took 401.499125ms waiting for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:10:41.754509   28549 pod_ready.go:38] duration metric: took 19.778479428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:10:41.754528   28549 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:10:41.754583   28549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:10:41.763591   28549 command_runner.go:130] > 1664
	I0906 15:10:41.764358   28549 api_server.go:71] duration metric: took 20.008105359s to wait for apiserver process to appear ...
	I0906 15:10:41.764367   28549 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:10:41.764374   28549 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57200/healthz ...
	I0906 15:10:41.770061   28549 api_server.go:266] https://127.0.0.1:57200/healthz returned 200:
	ok
	I0906 15:10:41.770090   28549 round_trippers.go:463] GET https://127.0.0.1:57200/version
	I0906 15:10:41.770095   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.770101   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.770108   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.770961   28549 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 15:10:41.770970   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.770975   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.770980   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.770985   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.770989   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.770994   28549 round_trippers.go:580]     Content-Length: 261
	I0906 15:10:41.770999   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.771004   28549 round_trippers.go:580]     Audit-Id: fd3a2c9b-f6d4-4525-a3eb-399fa18c42e3
	I0906 15:10:41.771106   28549 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.0",
	  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
	  "gitTreeState": "clean",
	  "buildDate": "2022-08-23T17:38:15Z",
	  "goVersion": "go1.19",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 15:10:41.771131   28549 api_server.go:140] control plane version: v1.25.0
	I0906 15:10:41.771137   28549 api_server.go:130] duration metric: took 6.765742ms to wait for apiserver health ...
	I0906 15:10:41.771142   28549 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:10:41.949934   28549 request.go:533] Waited for 178.751849ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:41.949975   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:41.949986   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:41.949999   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:41.950044   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:41.955316   28549 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 15:10:41.955328   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:41.955334   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:41.955344   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:41 GMT
	I0906 15:10:41.955350   28549 round_trippers.go:580]     Audit-Id: 2bb44cf3-a985-49e9-9a82-5a328c0b13b2
	I0906 15:10:41.955354   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:41.955360   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:41.955368   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:41.956655   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85340 chars]
	I0906 15:10:41.958514   28549 system_pods.go:59] 12 kube-system pods found
	I0906 15:10:41.958526   28549 system_pods.go:61] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:10:41.958530   28549 system_pods.go:61] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:10:41.958534   28549 system_pods.go:61] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:10:41.958537   28549 system_pods.go:61] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:10:41.958541   28549 system_pods.go:61] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:10:41.958544   28549 system_pods.go:61] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:10:41.958548   28549 system_pods.go:61] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:10:41.958552   28549 system_pods.go:61] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:10:41.958555   28549 system_pods.go:61] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:10:41.958558   28549 system_pods.go:61] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:10:41.958562   28549 system_pods.go:61] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:10:41.958569   28549 system_pods.go:61] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 15:10:41.958574   28549 system_pods.go:74] duration metric: took 187.427949ms to wait for pod list to return data ...
	I0906 15:10:41.958579   28549 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:10:42.148967   28549 request.go:533] Waited for 190.331771ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/default/serviceaccounts
	I0906 15:10:42.149107   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/default/serviceaccounts
	I0906 15:10:42.149115   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:42.149124   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:42.149132   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:42.152372   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:42.152385   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:42.152390   28549 round_trippers.go:580]     Content-Length: 261
	I0906 15:10:42.152396   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:42 GMT
	I0906 15:10:42.152402   28549 round_trippers.go:580]     Audit-Id: bdb3c494-9d48-4d7d-98a3-9f0dff362ae9
	I0906 15:10:42.152408   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:42.152415   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:42.152422   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:42.152427   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:42.152469   28549 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2535e7c3-51eb-44d2-8df8-c188db57dc73","resourceVersion":"310","creationTimestamp":"2022-09-06T22:06:47Z"}}]}
	I0906 15:10:42.152598   28549 default_sa.go:45] found service account: "default"
	I0906 15:10:42.152605   28549 default_sa.go:55] duration metric: took 194.021479ms for default service account to be created ...
	I0906 15:10:42.152610   28549 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:10:42.350016   28549 request.go:533] Waited for 197.352364ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:42.350052   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/namespaces/kube-system/pods
	I0906 15:10:42.350058   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:42.350096   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:42.350127   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:42.354324   28549 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:10:42.354336   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:42.354342   28549 round_trippers.go:580]     Audit-Id: fdfa76a1-8991-489f-9d02-70af290c9326
	I0906 15:10:42.354348   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:42.354355   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:42.354361   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:42.354366   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:42.354371   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:42 GMT
	I0906 15:10:42.356027   28549 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85340 chars]
	I0906 15:10:42.357887   28549 system_pods.go:86] 12 kube-system pods found
	I0906 15:10:42.357897   28549 system_pods.go:89] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:10:42.357902   28549 system_pods.go:89] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:10:42.357907   28549 system_pods.go:89] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:10:42.357910   28549 system_pods.go:89] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:10:42.357915   28549 system_pods.go:89] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:10:42.357918   28549 system_pods.go:89] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:10:42.357923   28549 system_pods.go:89] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:10:42.357927   28549 system_pods.go:89] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:10:42.357931   28549 system_pods.go:89] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:10:42.357947   28549 system_pods.go:89] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:10:42.357953   28549 system_pods.go:89] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:10:42.357960   28549 system_pods.go:89] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 15:10:42.357968   28549 system_pods.go:126] duration metric: took 205.352812ms to wait for k8s-apps to be running ...
	I0906 15:10:42.357974   28549 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:10:42.358022   28549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:10:42.367066   28549 system_svc.go:56] duration metric: took 9.086665ms WaitForService to wait for kubelet.
	I0906 15:10:42.367077   28549 kubeadm.go:573] duration metric: took 20.610823777s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:10:42.367089   28549 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:10:42.548377   28549 request.go:533] Waited for 181.186086ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57200/api/v1/nodes
	I0906 15:10:42.548416   28549 round_trippers.go:463] GET https://127.0.0.1:57200/api/v1/nodes
	I0906 15:10:42.548425   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:42.548435   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:42.548446   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:42.552376   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:42.552389   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:42.552395   28549 round_trippers.go:580]     Audit-Id: 9780444a-e8d4-40ee-b5af-fe67a45dd214
	I0906 15:10:42.552399   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:42.552405   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:42.552410   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:42.552414   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:42.552419   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:42 GMT
	I0906 15:10:42.552533   28549 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"703","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","op [truncated 16412 chars]
	I0906 15:10:42.552939   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:42.552946   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:42.552954   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:42.552957   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:42.552960   28549 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:10:42.552963   28549 node_conditions.go:123] node cpu capacity is 6
	I0906 15:10:42.552966   28549 node_conditions.go:105] duration metric: took 185.873701ms to run NodePressure ...
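
The NodePressure pass derives the capacity figures above from that NodeList response. A rough equivalent for pulling the same fields by hand, assuming kubectl access to the same cluster:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'
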
	I0906 15:10:42.552975   28549 start.go:216] waiting for startup goroutines ...
	I0906 15:10:42.553586   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:10:42.553649   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:10:42.575662   28549 out.go:177] * Starting worker node multinode-20220906150606-22187-m02 in cluster multinode-20220906150606-22187
	I0906 15:10:42.618531   28549 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:10:42.639337   28549 out.go:177] * Pulling base image ...
	I0906 15:10:42.681568   28549 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:10:42.681575   28549 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:10:42.681600   28549 cache.go:57] Caching tarball of preloaded images
	I0906 15:10:42.681762   28549 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:10:42.681782   28549 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:10:42.681908   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:10:42.745010   28549 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:10:42.745033   28549 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:10:42.745043   28549 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:10:42.745103   28549 start.go:364] acquiring machines lock for multinode-20220906150606-22187-m02: {Name:mk634e5142ae9a72af4ccf4e417277befcfbdc1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:10:42.745169   28549 start.go:368] acquired machines lock for "multinode-20220906150606-22187-m02" in 55.286µs
	I0906 15:10:42.745185   28549 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:10:42.745190   28549 fix.go:55] fixHost starting: m02
	I0906 15:10:42.745433   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:10:42.809416   28549 fix.go:103] recreateIfNeeded on multinode-20220906150606-22187-m02: state=Stopped err=<nil>
	W0906 15:10:42.809436   28549 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:10:42.831180   28549 out.go:177] * Restarting existing docker container for "multinode-20220906150606-22187-m02" ...
	I0906 15:10:42.852985   28549 cli_runner.go:164] Run: docker start multinode-20220906150606-22187-m02
	I0906 15:10:43.188246   28549 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:10:43.254114   28549 kic.go:415] container "multinode-20220906150606-22187-m02" state is running.
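
The restart path here is plain Docker CLI; the same start-then-probe sequence can be reproduced by hand with the container name from the log:

    docker start multinode-20220906150606-22187-m02
    docker container inspect -f '{{.State.Status}}' multinode-20220906150606-22187-m02
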
	I0906 15:10:43.254669   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:10:43.322971   28549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:10:43.323422   28549 machine.go:88] provisioning docker machine ...
	I0906 15:10:43.323435   28549 ubuntu.go:169] provisioning hostname "multinode-20220906150606-22187-m02"
	I0906 15:10:43.323493   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:43.391970   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:43.392155   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:43.392171   28549 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220906150606-22187-m02 && echo "multinode-20220906150606-22187-m02" | sudo tee /etc/hostname
	I0906 15:10:43.532110   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220906150606-22187-m02
	
	I0906 15:10:43.532191   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:43.597549   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:43.597725   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:43.597741   28549 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220906150606-22187-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220906150606-22187-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220906150606-22187-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:10:43.712509   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:10:43.712526   28549 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:10:43.712537   28549 ubuntu.go:177] setting up certificates
	I0906 15:10:43.712547   28549 provision.go:83] configureAuth start
	I0906 15:10:43.712618   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:10:43.778739   28549 provision.go:138] copyHostCerts
	I0906 15:10:43.778803   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:10:43.778881   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:10:43.778892   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:10:43.778984   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:10:43.779145   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:10:43.779211   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:10:43.779217   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:10:43.779277   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:10:43.779395   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:10:43.779422   28549 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:10:43.779427   28549 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:10:43.779483   28549 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:10:43.779601   28549 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.multinode-20220906150606-22187-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220906150606-22187-m02]
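
The server certificate generated here carries the SAN list shown at the end of that line (node IP, loopback, and both hostnames). To confirm what actually landed on disk (path abbreviated; this run keeps .minikube under the jenkins MINIKUBE_HOME shown above):

    openssl x509 -noout -text -in ~/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
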
	I0906 15:10:43.968716   28549 provision.go:172] copyRemoteCerts
	I0906 15:10:43.968773   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:10:43.968815   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.035889   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:44.132361   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 15:10:44.132426   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:10:44.151009   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 15:10:44.151085   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0906 15:10:44.167814   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 15:10:44.167874   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:10:44.184664   28549 provision.go:86] duration metric: configureAuth took 472.106773ms
	I0906 15:10:44.184678   28549 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:10:44.184844   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:10:44.184904   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.249656   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:44.249831   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:44.249841   28549 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:10:44.364803   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:10:44.364819   28549 ubuntu.go:71] root file system type: overlay
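
The df probe above is how the provisioner classifies the root filesystem (overlay, in the kic base image). An equivalent check, if findmnt is present in the container:

    findmnt -n -o FSTYPE /
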
	I0906 15:10:44.364963   28549 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:10:44.365039   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.428787   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:44.428933   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:44.428985   28549 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:10:44.552546   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:10:44.552616   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.619197   28549 main.go:134] libmachine: Using SSH client type: native
	I0906 15:10:44.619357   28549 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57230 <nil> <nil>}
	I0906 15:10:44.619370   28549 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:10:44.736783   28549 main.go:134] libmachine: SSH cmd err, output: <nil>: 
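
The one-liner above is an update-if-changed guard: diff exits non-zero when the rendered unit differs from the installed one, and only then is the new file swapped in and docker reloaded, enabled, and restarted; when the files match, nothing is touched. Spelled out, with the paths from the log:

    UNIT=/lib/systemd/system/docker.service
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
      sudo mv "$UNIT.new" "$UNIT"
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi
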
	I0906 15:10:44.736807   28549 machine.go:91] provisioned docker machine in 1.413364256s
	I0906 15:10:44.736814   28549 start.go:300] post-start starting for "multinode-20220906150606-22187-m02" (driver="docker")
	I0906 15:10:44.736822   28549 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:10:44.736883   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:10:44.736926   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.801413   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:44.881207   28549 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:10:44.884507   28549 command_runner.go:130] > NAME="Ubuntu"
	I0906 15:10:44.884517   28549 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0906 15:10:44.884522   28549 command_runner.go:130] > ID=ubuntu
	I0906 15:10:44.884528   28549 command_runner.go:130] > ID_LIKE=debian
	I0906 15:10:44.884533   28549 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0906 15:10:44.884537   28549 command_runner.go:130] > VERSION_ID="20.04"
	I0906 15:10:44.884541   28549 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 15:10:44.884547   28549 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 15:10:44.884554   28549 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 15:10:44.884564   28549 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 15:10:44.884570   28549 command_runner.go:130] > VERSION_CODENAME=focal
	I0906 15:10:44.884580   28549 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0906 15:10:44.884681   28549 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:10:44.884695   28549 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:10:44.884704   28549 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:10:44.884710   28549 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:10:44.884716   28549 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:10:44.884820   28549 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:10:44.884956   28549 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:10:44.884964   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /etc/ssl/certs/221872.pem
	I0906 15:10:44.885135   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:10:44.892218   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:10:44.908508   28549 start.go:303] post-start completed in 171.683253ms
	I0906 15:10:44.908571   28549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:10:44.908621   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:44.972115   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:45.052330   28549 command_runner.go:130] > 12%!(MISSING)
	I0906 15:10:45.052779   28549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:10:45.056934   28549 command_runner.go:130] > 49G
	I0906 15:10:45.057219   28549 fix.go:57] fixHost completed within 2.312018763s
	I0906 15:10:45.057231   28549 start.go:83] releasing machines lock for "multinode-20220906150606-22187-m02", held for 2.312047126s
	I0906 15:10:45.057313   28549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:10:45.142585   28549 out.go:177] * Found network options:
	I0906 15:10:45.163662   28549 out.go:177]   - NO_PROXY=192.168.58.2
	W0906 15:10:45.184811   28549 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 15:10:45.184863   28549 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 15:10:45.185006   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:10:45.185017   28549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:10:45.185059   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:45.185095   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:10:45.253383   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:45.253502   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57230 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:10:45.380356   28549 command_runner.go:130] > <a href="https://github.com/kubernetes/k8s.io/wiki/New-Registry-url-for-Kubernetes-(registry.k8s.io)">Temporary Redirect</a>.
	I0906 15:10:45.382029   28549 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0906 15:10:45.397509   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:10:45.465920   28549 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
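
The 10-cni.conf drop-in copied a few lines up lands in /etc/systemd/system/cri-docker.service.d/, so after the daemon-reload the effective unit is the base file plus that fragment. systemd can show the merged view:

    sudo systemctl cat cri-docker.service
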
	I0906 15:10:45.547928   28549 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:10:45.558771   28549 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0906 15:10:45.558783   28549 command_runner.go:130] > [Unit]
	I0906 15:10:45.558789   28549 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 15:10:45.558793   28549 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 15:10:45.558798   28549 command_runner.go:130] > BindsTo=containerd.service
	I0906 15:10:45.558805   28549 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0906 15:10:45.558811   28549 command_runner.go:130] > Wants=network-online.target
	I0906 15:10:45.558819   28549 command_runner.go:130] > Requires=docker.socket
	I0906 15:10:45.558825   28549 command_runner.go:130] > StartLimitBurst=3
	I0906 15:10:45.558832   28549 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 15:10:45.558836   28549 command_runner.go:130] > [Service]
	I0906 15:10:45.558840   28549 command_runner.go:130] > Type=notify
	I0906 15:10:45.558843   28549 command_runner.go:130] > Restart=on-failure
	I0906 15:10:45.558847   28549 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0906 15:10:45.558853   28549 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 15:10:45.558861   28549 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 15:10:45.558867   28549 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 15:10:45.558874   28549 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 15:10:45.558888   28549 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 15:10:45.558894   28549 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 15:10:45.558900   28549 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 15:10:45.558909   28549 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 15:10:45.558916   28549 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 15:10:45.558920   28549 command_runner.go:130] > ExecStart=
	I0906 15:10:45.558933   28549 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0906 15:10:45.558937   28549 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 15:10:45.558943   28549 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 15:10:45.558948   28549 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 15:10:45.558952   28549 command_runner.go:130] > LimitNOFILE=infinity
	I0906 15:10:45.558955   28549 command_runner.go:130] > LimitNPROC=infinity
	I0906 15:10:45.558958   28549 command_runner.go:130] > LimitCORE=infinity
	I0906 15:10:45.558963   28549 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 15:10:45.558967   28549 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 15:10:45.558971   28549 command_runner.go:130] > TasksMax=infinity
	I0906 15:10:45.558979   28549 command_runner.go:130] > TimeoutStartSec=0
	I0906 15:10:45.558984   28549 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 15:10:45.558988   28549 command_runner.go:130] > Delegate=yes
	I0906 15:10:45.558998   28549 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 15:10:45.559001   28549 command_runner.go:130] > KillMode=process
	I0906 15:10:45.559006   28549 command_runner.go:130] > [Install]
	I0906 15:10:45.559010   28549 command_runner.go:130] > WantedBy=multi-user.target
	I0906 15:10:45.559711   28549 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:10:45.559761   28549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:10:45.568579   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:10:45.580371   28549 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:10:45.580381   28549 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
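
With both endpoints in /etc/crictl.yaml pointed at cri-dockerd's socket, crictl reaches the Docker runtime through the CRI shim without extra flags (crictl reads /etc/crictl.yaml by default). A quick smoke test from a node shell, assuming the config above is in place:

    sudo crictl info
    sudo crictl ps
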
	I0906 15:10:45.581381   28549 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:10:45.654593   28549 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:10:45.730421   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:10:45.797805   28549 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:10:46.009555   28549 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:10:46.075810   28549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:10:46.143357   28549 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:10:46.152782   28549 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:10:46.152854   28549 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:10:46.156531   28549 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 15:10:46.156543   28549 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 15:10:46.156566   28549 command_runner.go:130] > Device: 10002fh/1048623d	Inode: 131         Links: 1
	I0906 15:10:46.156576   28549 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0906 15:10:46.156594   28549 command_runner.go:130] > Access: 2022-09-06 22:10:46.032110218 +0000
	I0906 15:10:46.156604   28549 command_runner.go:130] > Modify: 2022-09-06 22:10:45.483110267 +0000
	I0906 15:10:46.156611   28549 command_runner.go:130] > Change: 2022-09-06 22:10:45.484110267 +0000
	I0906 15:10:46.156616   28549 command_runner.go:130] >  Birth: -
	I0906 15:10:46.156701   28549 start.go:471] Will wait 60s for crictl version
	I0906 15:10:46.156746   28549 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:10:46.184350   28549 command_runner.go:130] > Version:  0.1.0
	I0906 15:10:46.184362   28549 command_runner.go:130] > RuntimeName:  docker
	I0906 15:10:46.184483   28549 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0906 15:10:46.184659   28549 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0906 15:10:46.187317   28549 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:10:46.187380   28549 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:10:46.219973   28549 command_runner.go:130] > 20.10.17
	I0906 15:10:46.223574   28549 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:10:46.256593   28549 command_runner.go:130] > 20.10.17
	I0906 15:10:46.302167   28549 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:10:46.324046   28549 out.go:177]   - env NO_PROXY=192.168.58.2
	I0906 15:10:46.345395   28549 cli_runner.go:164] Run: docker exec -t multinode-20220906150606-22187-m02 dig +short host.docker.internal
	I0906 15:10:46.462614   28549 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:10:46.462714   28549 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:10:46.466882   28549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
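
The hosts rewrite above is an idempotent replace-then-append: any existing host.minikube.internal entry is filtered out, the current mapping is appended, and the result is staged in a PID-suffixed temp file before being copied over /etc/hosts. The generic shape, with NAME and IP as placeholders:

    { grep -v "$NAME" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts
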
	I0906 15:10:46.476235   28549 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187 for IP: 192.168.58.3
	I0906 15:10:46.476355   28549 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:10:46.476403   28549 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:10:46.476410   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 15:10:46.476431   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 15:10:46.476448   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 15:10:46.476464   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 15:10:46.476592   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:10:46.476634   28549 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:10:46.476645   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:10:46.476691   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:10:46.476725   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:10:46.476754   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:10:46.476817   28549 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:10:46.476853   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.476872   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.476886   28549 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem -> /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.477195   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:10:46.495550   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:10:46.514404   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:10:46.531072   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:10:46.548321   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:10:46.564579   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:10:46.580884   28549 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:10:46.597302   28549 ssh_runner.go:195] Run: openssl version
	I0906 15:10:46.602256   28549 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0906 15:10:46.602613   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:10:46.610227   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.613913   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.614072   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.614115   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:10:46.618969   28549 command_runner.go:130] > 3ec20f2e
	I0906 15:10:46.619205   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:10:46.626953   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:10:46.634854   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.638630   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.638752   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.638798   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:10:46.643667   28549 command_runner.go:130] > b5213941
	I0906 15:10:46.644135   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:10:46.651147   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:10:46.658802   28549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.662678   28549 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.662755   28549 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.662801   28549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:10:46.667598   28549 command_runner.go:130] > 51391683
	I0906 15:10:46.667930   28549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
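Note on the openssl sequence above: OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, so each certificate is copied into /usr/share/ca-certificates, hashed with "openssl x509 -hash -noout", and symlinked as <hash>.0. A minimal Go sketch of that sequence (an illustrative helper, not minikube's actual certs.go code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a PEM certificate into OpenSSL's hashed-directory layout,
// mirroring the "openssl x509 -hash" + "ln -fs" sequence in the log above.
func installCA(pemPath, certsDir string) error {
	// Print the subject-name hash OpenSSL uses for lookup, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL expects <hash>.0; the numeric suffix disambiguates collisions.
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // the -f in "ln -fs": drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}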
	I0906 15:10:46.675148   28549 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:10:46.746440   28549 command_runner.go:130] > systemd
	I0906 15:10:46.751729   28549 cni.go:95] Creating CNI manager for ""
	I0906 15:10:46.751748   28549 cni.go:156] 3 nodes found, recommending kindnet
	I0906 15:10:46.751770   28549 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:10:46.751813   28549 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220906150606-22187 NodeName:multinode-20220906150606-22187-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:10:46.751910   28549 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220906150606-22187-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
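A quirk worth knowing when reading these dumps: the eviction thresholds above carry literal % characters, and in raw minikube logs they often surface as 0%!"(MISSING), because the config text is passed through a Printf-style formatter where fmt treats %" as a verb with no operand. A minimal Go reproduction (the call site is hypothetical, not taken from this log's exact code path):

package main

import "fmt"

func main() {
	cfg := `nodefs.available: "0%"` // literal % inside the config text
	// Passing the text AS the format string mangles it: fmt sees %" as a
	// verb with no operand and emits its missing-operand marker.
	// (go vet flags this call, which is exactly the point.)
	fmt.Println(fmt.Sprintf(cfg)) // nodefs.available: "0%!"(MISSING)
	// Passing it as an operand is safe.
	fmt.Printf("%s\n", cfg) // nodefs.available: "0%"
}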
	
	I0906 15:10:46.751969   28549 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220906150606-22187-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:10:46.752025   28549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:10:46.759093   28549 command_runner.go:130] > kubeadm
	I0906 15:10:46.759104   28549 command_runner.go:130] > kubectl
	I0906 15:10:46.759112   28549 command_runner.go:130] > kubelet
	I0906 15:10:46.759905   28549 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:10:46.759960   28549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0906 15:10:46.766908   28549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (496 bytes)
	I0906 15:10:46.779093   28549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:10:46.792909   28549 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:10:46.796497   28549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
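The one-liner above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal entry, appending the fresh mapping, staging the result in /tmp/h.$$, and copying it over the original. A Go sketch of the same idiom (illustrative, not minikube's code; it runs locally rather than over SSH as the log does):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts" // the log edits the node's copy over SSH
	const entry = "192.168.58.2\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Drop any stale control-plane.minikube.internal line, then re-append.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	// Stage in /tmp/h.<pid> (the shell's /tmp/h.$$), then copy over the
	// original. Copying rather than renaming preserves the file's inode,
	// which matters because docker bind-mounts /etc/hosts into the node.
	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
	content := []byte(strings.Join(kept, "\n") + "\n")
	if err := os.WriteFile(tmp, content, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile(hostsPath, content, 0o644); err != nil { // the "sudo cp"
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}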
	I0906 15:10:46.805780   28549 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:10:46.805960   28549 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:10:46.805965   28549 start.go:285] JoinCluster: &{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:10:46.806029   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0906 15:10:46.806072   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:46.869829   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:10:47.000276   28549 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
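The line above is the control plane answering "kubeadm token create --print-join-command --ttl=0" (with --ttl=0 the token never expires); minikube runs the printed command verbatim on the worker. A small sketch of pulling the token and CA hash out of such a line, should they be needed separately (a hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The join command printed by the control plane in the log above.
	join := `kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd`

	token := regexp.MustCompile(`--token (\S+)`).FindStringSubmatch(join)
	hash := regexp.MustCompile(`--discovery-token-ca-cert-hash (\S+)`).FindStringSubmatch(join)
	fmt.Println("token:  ", token[1])
	fmt.Println("ca hash:", hash[1])
}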
	I0906 15:10:47.000317   28549 start.go:298] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:10:47.000337   28549 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:10:47.000607   28549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl drain multinode-20220906150606-22187-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0906 15:10:47.000650   28549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:10:47.066185   28549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:10:47.183827   28549 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0906 15:10:47.212638   28549 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-cddz8, kube-system/kube-proxy-wnrrx
	I0906 15:10:50.222387   28549 command_runner.go:130] > node/multinode-20220906150606-22187-m02 cordoned
	I0906 15:10:50.222406   28549 command_runner.go:130] > pod "busybox-65db55d5d6-ppptb" has DeletionTimestamp older than 1 seconds, skipping
	I0906 15:10:50.222411   28549 command_runner.go:130] > node/multinode-20220906150606-22187-m02 drained
	I0906 15:10:50.222426   28549 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl drain multinode-20220906150606-22187-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.221792359s)
	I0906 15:10:50.222438   28549 node.go:109] successfully drained node "m02"
	I0906 15:10:50.222760   28549 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:10:50.222980   28549 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:10:50.223238   28549 request.go:1073] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0906 15:10:50.223263   28549 round_trippers.go:463] DELETE https://127.0.0.1:57200/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:10:50.223267   28549 round_trippers.go:469] Request Headers:
	I0906 15:10:50.223273   28549 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:10:50.223280   28549 round_trippers.go:473]     Content-Type: application/json
	I0906 15:10:50.223288   28549 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:10:50.227252   28549 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:10:50.227266   28549 round_trippers.go:577] Response Headers:
	I0906 15:10:50.227275   28549 round_trippers.go:580]     Audit-Id: 8493c1a8-8349-4bb8-9e0c-5e91482b57d7
	I0906 15:10:50.227283   28549 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:10:50.227288   28549 round_trippers.go:580]     Content-Type: application/json
	I0906 15:10:50.227295   28549 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:10:50.227300   28549 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:10:50.227304   28549 round_trippers.go:580]     Content-Length: 185
	I0906 15:10:50.227309   28549 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:10:50 GMT
	I0906 15:10:50.227322   28549 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-20220906150606-22187-m02","kind":"nodes","uid":"4f069859-75f2-4e6f-a5c1-5cceb9510b05"}}
	I0906 15:10:50.227344   28549 node.go:125] successfully deleted node "m02"
	I0906 15:10:50.227352   28549 start.go:302] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
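The drain plus the raw DELETE above remove the stale Node object so the name is free for a clean rejoin. The same deletion in ordinary client-go terms (a sketch; minikube's node.go goes through its own kapi client, and the KUBECONFIG env var here is an assumed stand-in for the integration kubeconfig):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of DELETE /api/v1/nodes/multinode-20220906150606-22187-m02:
	// remove the drained worker's Node object so the name can rejoin.
	err = cs.CoreV1().Nodes().Delete(context.Background(),
		"multinode-20220906150606-22187-m02", metav1.DeleteOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}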
	I0906 15:10:50.227363   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:10:50.227376   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:10:50.290880   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:10:50.401454   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:10:50.401470   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:10:50.421485   28549 command_runner.go:130] ! W0906 22:10:50.299616    1105 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:10:50.421498   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:10:50.421518   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:10:50.421524   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:10:50.421534   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:10:50.421542   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:10:50.421553   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:10:50.421560   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:10:50.421590   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:10:50.299616    1105 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:10:50.421603   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:10:50.421612   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:10:50.458648   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:10:50.458670   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:10:50.458696   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:10:50.458716   28549 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:10:50.299616    1105 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
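The retry.go lines implement a growing, jittered wait between join attempts (roughly 11s, 21.6s, 26.2s, 31.6s, 46.8s in this run). A generic sketch of that retry-with-backoff pattern, under the assumption that minikube's exact backoff parameters and implementation differ:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
// doubling a jittered delay each time, like the increasing
// "will retry after ..." intervals above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break // no point sleeping after the final failure
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(3, 10*time.Second, func() error {
		return errors.New(`a Node with name "m02" already exists`)
	})
	fmt.Println("giving up:", err)
}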
	I0906 15:11:01.505493   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:11:01.505540   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:11:01.541153   28549 command_runner.go:130] ! W0906 22:11:01.557994    1472 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:11:01.541381   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:11:01.565651   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:11:01.570349   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:11:01.630836   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:11:01.630848   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:11:01.654802   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:11:01.654814   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:01.657936   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:11:01.657949   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:11:01.657956   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:11:01.657990   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:01.557994    1472 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:01.657998   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:11:01.658009   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:11:01.692431   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:11:01.692452   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:01.692474   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:01.692487   28549 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:01.557994    1472 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:23.301254   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:11:23.301339   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:11:23.336882   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:11:23.435229   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:11:23.435245   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:11:23.452875   28549 command_runner.go:130] ! W0906 22:11:23.347792    1851 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:11:23.452889   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:11:23.452898   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:11:23.452911   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:11:23.452918   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:11:23.452924   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:11:23.452934   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:11:23.452941   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:11:23.452974   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:23.347792    1851 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:23.452981   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:11:23.452988   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:11:23.490281   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:11:23.490294   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:23.490309   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:23.490321   28549 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:23.347792    1851 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.694943   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:11:49.694987   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:11:49.730356   28549 command_runner.go:130] ! W0906 22:11:49.738584    2107 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:11:49.730479   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:11:49.753632   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:11:49.758227   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:11:49.814483   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:11:49.814497   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:11:49.839413   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:11:49.839426   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.842899   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:11:49.842911   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:11:49.842917   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:11:49.842942   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:49.738584    2107 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.842953   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:11:49.842964   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:11:49.879137   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:11:49.879154   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.879169   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:11:49.879179   28549 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:11:49.738584    2107 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.528439   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:12:21.528491   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:12:21.563322   28549 command_runner.go:130] ! W0906 22:12:21.572810    2419 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:12:21.563499   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:12:21.591204   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:12:21.595725   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:12:21.650881   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:12:21.650910   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:12:21.674745   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:12:21.674757   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.677651   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:12:21.677663   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:12:21.677670   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:12:21.677703   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:12:21.572810    2419 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.677711   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:12:21.677719   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:12:21.714343   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:12:21.714359   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.714380   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:12:21.714391   28549 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:12:21.572810    2419 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:13:08.524499   28549 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:13:08.524545   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:13:08.561083   28549 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:13:08.658459   28549 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:13:08.658486   28549 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:13:08.678063   28549 command_runner.go:130] ! W0906 22:13:08.561423    2827 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:08.678077   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:13:08.678089   28549 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:13:08.678096   28549 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:13:08.678102   28549 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:13:08.678108   28549 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:13:08.678118   28549 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:13:08.678123   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:13:08.678154   28549 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:13:08.561423    2827 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:13:08.678162   28549 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:13:08.678170   28549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:13:08.715429   28549 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:13:08.715448   28549 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:13:08.715473   28549 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:13:08.715491   28549 start.go:287] JoinCluster complete in 2m21.909026943s
	I0906 15:13:08.737406   28549 out.go:177] 
	W0906 15:13:08.758737   28549 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i1bwf8.we85hfijeqbb14x8 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:13:08.561423    2827 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:13:08.758769   28549 out.go:239] * 
	W0906 15:13:08.759858   28549 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:13:08.843265   28549 out.go:177] 
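The join failure above is what ultimately fails this test: kubeadm refuses to join because a Node named "multinode-20220906150606-22187-m02" with status "Ready" is already registered, and the follow-up kubeadm reset aborts because the host exposes two CRI endpoints (containerd and cri-dockerd). Both error messages spell out the fix themselves; a minimal manual remediation sketch, assuming kubectl is pointed at this cluster and reusing the same socket minikube already passes via --cri-socket:

    # Remove the stale Node object so a worker with the same name can rejoin.
    kubectl delete node multinode-20220906150606-22187-m02

    # Reset the worker, naming the CRI endpoint explicitly so kubeadm does not
    # stop on the "Found multiple CRI endpoints" ambiguity.
    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock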
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:09:53 UTC, end at Tue 2022-09-06 22:13:10 UTC. --
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[133]: time="2022-09-06T22:09:55.908235603Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[133]: time="2022-09-06T22:09:55.908798395Z" level=info msg="Daemon shutdown complete"
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[133]: time="2022-09-06T22:09:55.908872799Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 22:09:55 multinode-20220906150606-22187 systemd[1]: docker.service: Succeeded.
	Sep 06 22:09:55 multinode-20220906150606-22187 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 22:09:55 multinode-20220906150606-22187 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.962859315Z" level=info msg="Starting up"
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.964484334Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.964557175Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.964606785Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.964649595Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.965887645Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.965917585Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.965929177Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.965935124Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.969438876Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 06 22:09:55 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:55.974672832Z" level=info msg="Loading containers: start."
	Sep 06 22:09:56 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:56.073818181Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 22:09:56 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:56.106014208Z" level=info msg="Loading containers: done."
	Sep 06 22:09:56 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:56.114574857Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Sep 06 22:09:56 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:56.114643424Z" level=info msg="Daemon has completed initialization"
	Sep 06 22:09:56 multinode-20220906150606-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:09:56 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:56.139000624Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:09:56 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:09:56.142311825Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 22:10:37 multinode-20220906150606-22187 dockerd[621]: time="2022-09-06T22:10:37.337962371Z" level=info msg="ignoring event" container=11d34d183821ea54724b06cc0209a8a7d4bde061bfd1a789951bbdb6da272f01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	167b4a4f33064       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   7596442e53b5e
	06ab6cf627e88       d921cee849482                                                                                         3 minutes ago       Running             kindnet-cni               1                   c1eee0e53b49b
	d759aa3a43843       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   0f037fd738e3b
	803ede0924699       58a9a0c6d96f2                                                                                         3 minutes ago       Running             kube-proxy                1                   e266c748731b9
	af277a5518c67       5185b96f0becf                                                                                         3 minutes ago       Running             coredns                   1                   4f1337150041c
	11d34d183821e       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   7596442e53b5e
	4c8a1f372186f       1a54c86c03a67                                                                                         3 minutes ago       Running             kube-controller-manager   1                   9456ca1d4c44a
	3c8f51d8691c7       a8a176a5d5d69                                                                                         3 minutes ago       Running             etcd                      1                   22c8f9d461788
	ef78db90e1cfa       bef2cf3115095                                                                                         3 minutes ago       Running             kube-scheduler            1                   8cecea8208ec0
	62ca7e8901de2       4d2edfd10d3e3                                                                                         3 minutes ago       Running             kube-apiserver            1                   c20d3976c12a9
	a9b289a825793       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago       Exited              busybox                   0                   eccfa27c6a596
	df0852bc7a514       5185b96f0becf                                                                                         6 minutes ago       Exited              coredns                   0                   a34f733a43c26
	3c20933150542       kindest/kindnetd@sha256:e2d4d675dcf28a90102ad5219b75c5a0ee096c4321247dfae31dd1467611a9fb              6 minutes ago       Exited              kindnet-cni               0                   6bd8b364f108c
	fdc326cd3c6a4       58a9a0c6d96f2                                                                                         6 minutes ago       Exited              kube-proxy                0                   4e3670b1600d7
	6d68f544bf545       a8a176a5d5d69                                                                                         6 minutes ago       Exited              etcd                      0                   a165f2074320f
	28bc9837a5104       bef2cf3115095                                                                                         6 minutes ago       Exited              kube-scheduler            0                   0c0974b47f92c
	33a1b253bd371       4d2edfd10d3e3                                                                                         6 minutes ago       Exited              kube-apiserver            0                   c27dff0f48e6b
	77d6030ab01b9       1a54c86c03a67                                                                                         6 minutes ago       Exited              kube-controller-manager   0                   defb450e84c2b
	
	* 
	* ==> coredns [af277a5518c6] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
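The final warning above means this coredns replica timed out reaching the in-cluster kubernetes Service at 10.96.0.1:443 while the control plane was restarting. A quick way to repeat that probe from a throwaway pod (a diagnostic sketch only; the pod name and busybox image tag are arbitrary choices, not part of the test):

    kubectl run api-probe --rm -it --restart=Never --image=busybox:1.35 -- \
      wget -qO- --no-check-certificate https://10.96.0.1:443/version

If the Service VIP is healthy this prints the apiserver's version JSON; an i/o timeout here points at kube-proxy or the CNI rather than at CoreDNS itself.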
	
	* 
	* ==> coredns [df0852bc7a51] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20220906150606-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220906150606-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=multinode-20220906150606-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_06_36_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:06:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220906150606-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:13:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:10:05 +0000   Tue, 06 Sep 2022 22:06:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:10:05 +0000   Tue, 06 Sep 2022 22:06:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:10:05 +0000   Tue, 06 Sep 2022 22:06:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 22:10:05 +0000   Tue, 06 Sep 2022 22:07:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-20220906150606-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                ece1d71d-4751-4899-8609-9a55b2eb3fdc
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-trdqs                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 coredns-565d847f94-t6l66                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     6m23s
	  kube-system                 etcd-multinode-20220906150606-22187                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         6m35s
	  kube-system                 kindnet-nh9r5                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m23s
	  kube-system                 kube-apiserver-multinode-20220906150606-22187              250m (4%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-multinode-20220906150606-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-proxy-kkmpm                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-multinode-20220906150606-22187              100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m22s                  kube-proxy       
	  Normal  Starting                 3m2s                   kube-proxy       
	  Normal  Starting                 6m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m50s (x4 over 6m50s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m50s (x4 over 6m50s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m50s (x4 over 6m50s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    6m35s                  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s                  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m35s                  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           6m24s                  node-controller  Node multinode-20220906150606-22187 event: Registered Node multinode-20220906150606-22187 in Controller
	  Normal  NodeReady                6m5s                   kubelet          Node multinode-20220906150606-22187 status is now: NodeReady
	  Normal  Starting                 3m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m10s (x8 over 3m10s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m10s (x8 over 3m10s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m10s (x7 over 3m10s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m54s                  node-controller  Node multinode-20220906150606-22187 event: Registered Node multinode-20220906150606-22187 in Controller
	
	
	Name:               multinode-20220906150606-22187-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220906150606-22187-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:10:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220906150606-22187-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:13:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:10:50 +0000   Tue, 06 Sep 2022 22:10:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:10:50 +0000   Tue, 06 Sep 2022 22:10:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:10:50 +0000   Tue, 06 Sep 2022 22:10:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 22:10:50 +0000   Tue, 06 Sep 2022 22:10:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-20220906150606-22187-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                9b18602e-693b-4709-ad03-6dd20ccb7ab5
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cddz8       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m38s
	  kube-system                 kube-proxy-wnrrx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m27s                  kube-proxy  
	  Normal  Starting                 2m6s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  5m38s (x8 over 5m51s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s (x8 over 5m51s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m28s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m21s (x7 over 2m28s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x7 over 2m28s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m28s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-20220906150606-22187-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220906150606-22187-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:09:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220906150606-22187-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:09:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 06 Sep 2022 22:09:12 +0000   Tue, 06 Sep 2022 22:10:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 06 Sep 2022 22:09:12 +0000   Tue, 06 Sep 2022 22:10:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 06 Sep 2022 22:09:12 +0000   Tue, 06 Sep 2022 22:10:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 06 Sep 2022 22:09:12 +0000   Tue, 06 Sep 2022 22:10:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.4
	  Hostname:    multinode-20220906150606-22187-m03
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                629d0108-7ab7-4732-836f-d37d68dd9685
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-qmjcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kindnet-jkg8p               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-proxy-czbjx            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  Starting                 4m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s (x2 over 4m58s)  kubelet          Node multinode-20220906150606-22187-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x2 over 4m58s)  kubelet          Node multinode-20220906150606-22187-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x2 over 4m58s)  kubelet          Node multinode-20220906150606-22187-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m48s                  kubelet          Node multinode-20220906150606-22187-m03 status is now: NodeReady
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x2 over 4m10s)  kubelet          Node multinode-20220906150606-22187-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x2 over 4m10s)  kubelet          Node multinode-20220906150606-22187-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x2 over 4m10s)  kubelet          Node multinode-20220906150606-22187-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m59s                  kubelet          Node multinode-20220906150606-22187-m03 status is now: NodeReady
	  Normal  RegisteredNode           2m54s                  node-controller  Node multinode-20220906150606-22187-m03 event: Registered Node multinode-20220906150606-22187-m03 in Controller
	  Normal  NodeNotReady             2m14s                  node-controller  Node multinode-20220906150606-22187-m03 status is now: NodeNotReady
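Of the three nodes described above, only m03 carries the unreachable NoSchedule/NoExecute taints and all-Unknown conditions (its kubelet stopped posting status at 22:10:57). When scanning a dump like this, the same facts can be pulled in one line (a jsonpath sketch; the tab-separated columns are node name, Ready status, and taint keys):

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\t"}{.spec.taints[*].key}{"\n"}{end}'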
	
	* 
	* ==> dmesg <==
	* [  +0.001536] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001105] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001751] FS-Cache: N-cookie d=000000006f57a5f8 n=0000000004119ae2
	[  +0.001424] FS-Cache: N-key=[8] '89c5800300000000'
	[  +0.002109] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000d596ead8 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001797] FS-Cache: O-cookie d=000000006f57a5f8 n=00000000f83b458d
	[  +0.001466] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001134] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001810] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001458] FS-Cache: N-key=[8] '89c5800300000000'
	[  +3.680989] FS-Cache: Duplicate cookie detected
	[  +0.001019] FS-Cache: O-cookie c=000000003a8c8805 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000057637cac
	[  +0.001460] FS-Cache: O-key=[8] '88c5800300000000'
	[  +0.001144] FS-Cache: N-cookie c=000000000ab19587 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001761] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001454] FS-Cache: N-key=[8] '88c5800300000000'
	[  +0.676412] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000dd15d770 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000060e892c8
	[  +0.001441] FS-Cache: O-key=[8] '93c5800300000000'
	[  +0.001122] FS-Cache: N-cookie c=00000000e728d4f6 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001752] FS-Cache: N-cookie d=000000006f57a5f8 n=000000009b87565f
	[  +0.001438] FS-Cache: N-key=[8] '93c5800300000000'
	
	* 
	* ==> etcd [3c8f51d8691c] <==
	* {"level":"info","ts":"2022-09-06T22:10:02.447Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-09-06T22:10:02.448Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-09-06T22:10:02.448Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-09-06T22:10:03.640Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:10:03.640Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:10:03.641Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:10:03.641Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-09-06T22:10:03.640Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-20220906150606-22187 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:10:03.643Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:10:03.643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [6d68f544bf54] <==
	* {"level":"info","ts":"2022-09-06T22:06:30.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-09-06T22:06:30.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-09-06T22:06:30.127Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-20220906150606-22187 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:06:30.127Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:06:30.127Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:06:30.128Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:06:30.128Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:06:30.128Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-09-06T22:06:30.128Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:06:30.129Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:06:30.131Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:06:30.131Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:06:30.131Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:07:13.252Z","caller":"traceutil/trace.go:171","msg":"trace[1545517756] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"200.950373ms","start":"2022-09-06T22:07:13.051Z","end":"2022-09-06T22:07:13.252Z","steps":["trace[1545517756] 'process raft request'  (duration: 200.442616ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-06T22:08:05.575Z","caller":"traceutil/trace.go:171","msg":"trace[1870070703] linearizableReadLoop","detail":"{readStateIndex:564; appliedIndex:564; }","duration":"135.52665ms","start":"2022-09-06T22:08:05.439Z","end":"2022-09-06T22:08:05.574Z","steps":["trace[1870070703] 'read index received'  (duration: 135.521616ms)","trace[1870070703] 'applied index is now lower than readState.Index'  (duration: 4.32µs)"],"step_count":2}
	{"level":"warn","ts":"2022-09-06T22:08:05.575Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"136.10069ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-09-06T22:08:05.575Z","caller":"traceutil/trace.go:171","msg":"trace[1705442846] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:535; }","duration":"136.205665ms","start":"2022-09-06T22:08:05.439Z","end":"2022-09-06T22:08:05.575Z","steps":["trace[1705442846] 'agreement among raft nodes before linearized reading'  (duration: 135.593574ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-06T22:09:15.966Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-09-06T22:09:15.966Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-20220906150606-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/09/06 22:09:15 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/09/06 22:09:15 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-09-06T22:09:15.973Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-09-06T22:09:15.975Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-09-06T22:09:15.976Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-09-06T22:09:15.976Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"multinode-20220906150606-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:13:11 up 29 min,  0 users,  load average: 0.11, 0.40, 0.46
	Linux multinode-20220906150606-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [33a1b253bd37] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:09:26.027664       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:09:26.030143       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:09:26.055951       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [62ca7e8901de] <==
	* I0906 22:10:05.364584       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0906 22:10:05.363842       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0906 22:10:05.373668       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 22:10:05.373947       1 available_controller.go:491] Starting AvailableConditionController
	I0906 22:10:05.373974       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0906 22:10:05.374093       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 22:10:05.387416       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0906 22:10:05.424937       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0906 22:10:05.432993       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:10:05.463835       1 cache.go:39] Caches are synced for autoregister controller
	I0906 22:10:05.463892       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0906 22:10:05.464386       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 22:10:05.464454       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0906 22:10:05.464565       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0906 22:10:05.464734       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0906 22:10:05.475071       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 22:10:06.181282       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 22:10:06.366147       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 22:10:08.017941       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:10:08.216752       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:10:08.224309       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:10:08.302513       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:10:08.307424       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:10:17.744726       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 22:10:17.753244       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [4c8a1f372186] <==
	* I0906 22:10:17.743352       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0906 22:10:17.745569       1 shared_informer.go:262] Caches are synced for ephemeral
	I0906 22:10:17.745575       1 shared_informer.go:262] Caches are synced for endpoint
	I0906 22:10:17.750055       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0906 22:10:17.810068       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:10:17.819089       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0906 22:10:17.819168       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0906 22:10:17.819216       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0906 22:10:17.819246       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0906 22:10:17.822030       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0906 22:10:17.846702       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:10:17.911133       1 shared_informer.go:262] Caches are synced for service account
	I0906 22:10:17.920994       1 shared_informer.go:262] Caches are synced for namespace
	I0906 22:10:18.260188       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:10:18.325585       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:10:18.325623       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 22:10:47.227208       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-qmjcf"
	W0906 22:10:50.236576       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m03 node
	W0906 22:10:50.313231       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220906150606-22187-m02" does not exist
	W0906 22:10:50.313545       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
	I0906 22:10:50.317403       1 range_allocator.go:367] Set node multinode-20220906150606-22187-m02 PodCIDR to [10.244.1.0/24]
	W0906 22:10:57.710141       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
	I0906 22:10:57.710562       1 event.go:294] "Event occurred" object="multinode-20220906150606-22187-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220906150606-22187-m03 status is now: NodeNotReady"
	I0906 22:10:57.715292       1 event.go:294] "Event occurred" object="kube-system/kindnet-jkg8p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0906 22:10:57.720316       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-czbjx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-controller-manager [77d6030ab01b] <==
	* I0906 22:07:07.368858       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0906 22:07:33.664816       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220906150606-22187-m02" does not exist
	I0906 22:07:33.669205       1 range_allocator.go:367] Set node multinode-20220906150606-22187-m02 PodCIDR to [10.244.1.0/24]
	I0906 22:07:33.672450       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cddz8"
	I0906 22:07:33.672579       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wnrrx"
	W0906 22:07:37.351496       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-20220906150606-22187-m02. Assuming now as a timestamp.
	I0906 22:07:37.351606       1 event.go:294] "Event occurred" object="multinode-20220906150606-22187-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220906150606-22187-m02 event: Registered Node multinode-20220906150606-22187-m02 in Controller"
	W0906 22:07:54.128515       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
	I0906 22:07:56.877022       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-65db55d5d6 to 2"
	I0906 22:07:56.882874       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-ppptb"
	I0906 22:07:56.925317       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-trdqs"
	I0906 22:07:57.339133       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-ppptb" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-ppptb"
	W0906 22:08:13.798133       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220906150606-22187-m03" does not exist
	W0906 22:08:13.798699       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
	I0906 22:08:13.805120       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-czbjx"
	I0906 22:08:13.807565       1 range_allocator.go:367] Set node multinode-20220906150606-22187-m03 PodCIDR to [10.244.2.0/24]
	I0906 22:08:13.808740       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jkg8p"
	W0906 22:08:17.322042       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-20220906150606-22187-m03. Assuming now as a timestamp.
	I0906 22:08:17.322223       1 event.go:294] "Event occurred" object="multinode-20220906150606-22187-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220906150606-22187-m03 event: Registered Node multinode-20220906150606-22187-m03 in Controller"
	W0906 22:08:23.872524       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m03 node
	W0906 22:09:01.233304       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
	W0906 22:09:01.997978       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220906150606-22187-m03" does not exist
	W0906 22:09:01.998237       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
	I0906 22:09:02.001295       1 range_allocator.go:367] Set node multinode-20220906150606-22187-m03 PodCIDR to [10.244.3.0/24]
	W0906 22:09:12.196754       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m03 node
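
Note: the recurring topologycache warnings above fire when a node lacks either allocatable CPU information or a topology.kubernetes.io/zone label, the two inputs the EndpointSlice topology cache needs; minikube nodes typically carry no zone label. A minimal client-go sketch (not part of the test run) that inspects exactly those inputs; the kubeconfig handling and output format are illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Both values must be present for topology-aware hints; if either is
		// missing, kube-controller-manager logs the warning seen above.
		cpu := n.Status.Allocatable[corev1.ResourceCPU]
		zone := n.Labels["topology.kubernetes.io/zone"]
		fmt.Printf("%s allocatable cpu=%s zone=%q\n", n.Name, cpu.String(), zone)
	}
}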
	
	* 
	* ==> kube-proxy [803ede092469] <==
	* I0906 22:10:08.212824       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0906 22:10:08.212891       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0906 22:10:08.212969       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:10:08.240697       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:10:08.240756       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:10:08.240763       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:10:08.240772       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:10:08.240792       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:10:08.240986       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:10:08.241179       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:10:08.241188       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:10:08.242676       1 config.go:317] "Starting service config controller"
	I0906 22:10:08.242713       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:10:08.242731       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:10:08.242734       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:10:08.243279       1 config.go:444] "Starting node config controller"
	I0906 22:10:08.243307       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:10:08.343321       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:10:08.343393       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:10:08.343690       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [fdc326cd3c6a] <==
	* I0906 22:06:48.722271       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0906 22:06:48.722338       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0906 22:06:48.722372       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:06:48.740342       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:06:48.740384       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:06:48.740393       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:06:48.740400       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:06:48.740418       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:06:48.740665       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:06:48.740795       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:06:48.740863       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:06:48.742220       1 config.go:444] "Starting node config controller"
	I0906 22:06:48.742274       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:06:48.742832       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:06:48.742860       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:06:48.742893       1 config.go:317] "Starting service config controller"
	I0906 22:06:48.742923       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:06:48.843282       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:06:48.843336       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:06:48.843715       1 shared_informer.go:262] Caches are synced for node config
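
Note: both kube-proxy instances log the same client-go shared-informer startup sequence: start the informers, then block until the initial cache fill completes. A minimal sketch of that pattern (not part of the test run), assuming client-go; the Service informer and 30s resync period are illustrative:

package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	services := factory.Core().V1().Services().Informer()

	factory.Start(stop) // begins the list+watch, cf. "Starting service config controller"
	log.Println("waiting for caches to sync for service config")
	if !cache.WaitForCacheSync(stop, services.HasSynced) {
		log.Fatal("timed out waiting for caches to sync")
	}
	log.Println("caches are synced for service config") // cf. shared_informer.go:262 above
}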
	
	* 
	* ==> kube-scheduler [28bc9837a510] <==
	* W0906 22:06:31.783676       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 22:06:31.783686       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 22:06:31.783688       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 22:06:31.783752       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 22:06:31.783765       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 22:06:31.783744       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 22:06:32.604439       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 22:06:32.604528       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 22:06:32.701095       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 22:06:32.701126       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 22:06:32.757049       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 22:06:32.757139       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 22:06:32.764158       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 22:06:32.764402       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 22:06:32.769648       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 22:06:32.769720       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 22:06:32.788540       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 22:06:32.788625       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 22:06:32.838638       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 22:06:32.838675       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:06:32.895204       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 22:06:32.895240       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0906 22:06:36.075164       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 22:09:15.970723       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0906 22:09:15.970770       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [ef78db90e1cf] <==
	* I0906 22:10:03.522056       1 serving.go:348] Generated self-signed cert in-memory
	W0906 22:10:05.430779       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 22:10:05.431382       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:10:05.431579       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 22:10:05.431687       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 22:10:05.444404       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.0"
	I0906 22:10:05.444441       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:10:05.445894       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 22:10:05.445934       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:10:05.446107       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 22:10:05.448757       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:10:05.546127       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:09:53 UTC, end at Tue 2022-09-06 22:13:12 UTC. --
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.346182    1165 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.346263    1165 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.346293    1165 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397763    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d3ced34-e06b-4586-8c69-2f495e1290dd-config-volume\") pod \"coredns-565d847f94-t6l66\" (UID: \"3d3ced34-e06b-4586-8c69-2f495e1290dd\") " pod="kube-system/coredns-565d847f94-t6l66"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397819    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2tp\" (UniqueName: \"kubernetes.io/projected/3d3ced34-e06b-4586-8c69-2f495e1290dd-kube-api-access-xb2tp\") pod \"coredns-565d847f94-t6l66\" (UID: \"3d3ced34-e06b-4586-8c69-2f495e1290dd\") " pod="kube-system/coredns-565d847f94-t6l66"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397841    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgmbb\" (UniqueName: \"kubernetes.io/projected/cf24b814-e576-465e-9c3e-f8c04c05c695-kube-api-access-mgmbb\") pod \"storage-provisioner\" (UID: \"cf24b814-e576-465e-9c3e-f8c04c05c695\") " pod="kube-system/storage-provisioner"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397860    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdlc\" (UniqueName: \"kubernetes.io/projected/1cf732b5-70cb-44d1-acf9-34a0abad6541-kube-api-access-jmdlc\") pod \"busybox-65db55d5d6-trdqs\" (UID: \"1cf732b5-70cb-44d1-acf9-34a0abad6541\") " pod="default/busybox-65db55d5d6-trdqs"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397875    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf24b814-e576-465e-9c3e-f8c04c05c695-tmp\") pod \"storage-provisioner\" (UID: \"cf24b814-e576-465e-9c3e-f8c04c05c695\") " pod="kube-system/storage-provisioner"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397894    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7nvs\" (UniqueName: \"kubernetes.io/projected/0b228e9a-6577-46a3-b848-9c9fca602ba6-kube-api-access-t7nvs\") pod \"kube-proxy-kkmpm\" (UID: \"0b228e9a-6577-46a3-b848-9c9fca602ba6\") " pod="kube-system/kube-proxy-kkmpm"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397910    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b228e9a-6577-46a3-b848-9c9fca602ba6-lib-modules\") pod \"kube-proxy-kkmpm\" (UID: \"0b228e9a-6577-46a3-b848-9c9fca602ba6\") " pod="kube-system/kube-proxy-kkmpm"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397923    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bae0c657-7cfe-416f-bbcd-b3d229bd137a-cni-cfg\") pod \"kindnet-nh9r5\" (UID: \"bae0c657-7cfe-416f-bbcd-b3d229bd137a\") " pod="kube-system/kindnet-nh9r5"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397940    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bae0c657-7cfe-416f-bbcd-b3d229bd137a-xtables-lock\") pod \"kindnet-nh9r5\" (UID: \"bae0c657-7cfe-416f-bbcd-b3d229bd137a\") " pod="kube-system/kindnet-nh9r5"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.397953    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlqk\" (UniqueName: \"kubernetes.io/projected/bae0c657-7cfe-416f-bbcd-b3d229bd137a-kube-api-access-lvlqk\") pod \"kindnet-nh9r5\" (UID: \"bae0c657-7cfe-416f-bbcd-b3d229bd137a\") " pod="kube-system/kindnet-nh9r5"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.398023    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b228e9a-6577-46a3-b848-9c9fca602ba6-kube-proxy\") pod \"kube-proxy-kkmpm\" (UID: \"0b228e9a-6577-46a3-b848-9c9fca602ba6\") " pod="kube-system/kube-proxy-kkmpm"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.398048    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b228e9a-6577-46a3-b848-9c9fca602ba6-xtables-lock\") pod \"kube-proxy-kkmpm\" (UID: \"0b228e9a-6577-46a3-b848-9c9fca602ba6\") " pod="kube-system/kube-proxy-kkmpm"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.398074    1165 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bae0c657-7cfe-416f-bbcd-b3d229bd137a-lib-modules\") pod \"kindnet-nh9r5\" (UID: \"bae0c657-7cfe-416f-bbcd-b3d229bd137a\") " pod="kube-system/kindnet-nh9r5"
	Sep 06 22:10:06 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:06.398087    1165 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 22:10:07 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:07.520324    1165 request.go:601] Waited for 1.040647834s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token
	Sep 06 22:10:07 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:07.918956    1165 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0f037fd738e3ba4c6e03c37f876d80869b10ce9c3c20ac3a929eb7e1a75181fe"
	Sep 06 22:10:08 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:08.965461    1165 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Sep 06 22:10:10 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:10.136569    1165 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Sep 06 22:10:38 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:38.173685    1165 scope.go:115] "RemoveContainer" containerID="1ed0dda0b42ecab62b679f37177cc5411c9e31684903faa29c56d099f1617738"
	Sep 06 22:10:38 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:38.173928    1165 scope.go:115] "RemoveContainer" containerID="11d34d183821ea54724b06cc0209a8a7d4bde061bfd1a789951bbdb6da272f01"
	Sep 06 22:10:38 multinode-20220906150606-22187 kubelet[1165]: E0906 22:10:38.174045    1165 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cf24b814-e576-465e-9c3e-f8c04c05c695)\"" pod="kube-system/storage-provisioner" podUID=cf24b814-e576-465e-9c3e-f8c04c05c695
	Sep 06 22:10:49 multinode-20220906150606-22187 kubelet[1165]: I0906 22:10:49.423062    1165 scope.go:115] "RemoveContainer" containerID="11d34d183821ea54724b06cc0209a8a7d4bde061bfd1a789951bbdb6da272f01"
	
	* 
	* ==> storage-provisioner [11d34d183821] <==
	* I0906 22:10:07.338127       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0906 22:10:37.320831       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [167b4a4f3306] <==
	* I0906 22:10:49.510785       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:10:49.517614       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:10:49.517659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:11:06.911287       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:11:06.911361       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0711daa2-101b-4a50-9513-72f9a901e5c3", APIVersion:"v1", ResourceVersion:"900", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220906150606-22187_200cc0fa-3072-4355-9e10-6fff6c61daf6 became leader
	I0906 22:11:06.911387       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220906150606-22187_200cc0fa-3072-4355-9e10-6fff6c61daf6!
	I0906 22:11:07.011652       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220906150606-22187_200cc0fa-3072-4355-9e10-6fff6c61daf6!
	

-- /stdout --
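
Note: the storage-provisioner section in the dump above shows client-go leader election (leaderelection.go:243/253): the restarted container only starts its provisioner controller once it acquires the k8s.io-minikube-hostpath lock. A minimal sketch of that pattern, assuming client-go; the provisioner in the log locks an Endpoints object, while this sketch uses the newer Lease lock, and the identity string is illustrative:

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease named after the lock in the log; "example-identity" is illustrative.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// cf. "successfully acquired lease" at leaderelection.go:253 above
				log.Println("acquired lease; starting controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}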
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-20220906150606-22187 -n multinode-20220906150606-22187
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-20220906150606-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-qmjcf
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-20220906150606-22187 describe pod busybox-65db55d5d6-qmjcf
helpers_test.go:280: (dbg) kubectl --context multinode-20220906150606-22187 describe pod busybox-65db55d5d6-qmjcf:

-- stdout --
	Name:             busybox-65db55d5d6-qmjcf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             multinode-20220906150606-22187-m03/
	Labels:           app=busybox
	                  pod-template-hash=65db55d5d6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-65db55d5d6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ghcng (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-ghcng:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m26s  default-scheduler  Successfully assigned default/busybox-65db55d5d6-qmjcf to multinode-20220906150606-22187-m03

-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (238.60s)
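
Note: the post-mortem locates the stuck pod with kubectl's --field-selector=status.phase!=Running (helpers_test.go:261 above). A minimal client-go sketch of the same query, assuming a standard kubeconfig; the output format is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}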

x
+
TestMultiNode/serial/RestartMultiNode (209.54s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220906150606-22187 --wait=true -v=8 --alsologtostderr --driver=docker 
E0906 15:14:04.317309   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
multinode_test.go:352: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220906150606-22187 --wait=true -v=8 --alsologtostderr --driver=docker : exit status 80 (3m24.257333328s)

-- stdout --
	* [multinode-20220906150606-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220906150606-22187 in cluster multinode-20220906150606-22187
	* Pulling base image ...
	* Restarting existing docker container for "multinode-20220906150606-22187" ...
	* Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-20220906150606-22187-m02 in cluster multinode-20220906150606-22187
	* Pulling base image ...
	* Restarting existing docker container for "multinode-20220906150606-22187-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	  - env NO_PROXY=192.168.58.2
	
	

-- /stdout --
** stderr ** 
	I0906 15:13:47.095685   29027 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:13:47.095920   29027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:13:47.095925   29027 out.go:309] Setting ErrFile to fd 2...
	I0906 15:13:47.095929   29027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:13:47.096053   29027 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:13:47.096485   29027 out.go:303] Setting JSON to false
	I0906 15:13:47.111360   29027 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7998,"bootTime":1662494429,"procs":341,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:13:47.111459   29027 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:13:47.133023   29027 out.go:177] * [multinode-20220906150606-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:13:47.177166   29027 notify.go:193] Checking for updates...
	I0906 15:13:47.198517   29027 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:13:47.219906   29027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:13:47.241034   29027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:13:47.262917   29027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:13:47.285064   29027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:13:47.307506   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:13:47.308140   29027 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:13:47.376045   29027 docker.go:137] docker version: linux-20.10.17
	I0906 15:13:47.376161   29027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:13:47.505530   29027 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:13:47.440188995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:13:47.549224   29027 out.go:177] * Using the docker driver based on existing profile
	I0906 15:13:47.570151   29027 start.go:284] selected driver: docker
	I0906 15:13:47.570171   29027 start.go:808] validating driver "docker" against &{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:13:47.570305   29027 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:13:47.570446   29027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:13:47.698814   29027 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:13:47.634507209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:13:47.700850   29027 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:13:47.700876   29027 cni.go:95] Creating CNI manager for ""
	I0906 15:13:47.700884   29027 cni.go:156] 2 nodes found, recommending kindnet
	I0906 15:13:47.700898   29027 start_flags.go:310] config:
	{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:13:47.722711   29027 out.go:177] * Starting control plane node multinode-20220906150606-22187 in cluster multinode-20220906150606-22187
	I0906 15:13:47.765800   29027 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:13:47.787499   29027 out.go:177] * Pulling base image ...
	I0906 15:13:47.831038   29027 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:13:47.831062   29027 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:13:47.831139   29027 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:13:47.831161   29027 cache.go:57] Caching tarball of preloaded images
	I0906 15:13:47.831935   29027 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:13:47.832130   29027 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:13:47.832517   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:13:47.894503   29027 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:13:47.894519   29027 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:13:47.894528   29027 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:13:47.894564   29027 start.go:364] acquiring machines lock for multinode-20220906150606-22187: {Name:mk1f646be94138ec52cb695dba30aa00d55e22df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:13:47.894639   29027 start.go:368] acquired machines lock for "multinode-20220906150606-22187" in 55.567µs
	I0906 15:13:47.894657   29027 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:13:47.894668   29027 fix.go:55] fixHost starting: 
	I0906 15:13:47.894924   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:13:47.957408   29027 fix.go:103] recreateIfNeeded on multinode-20220906150606-22187: state=Stopped err=<nil>
	W0906 15:13:47.957439   29027 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:13:48.001523   29027 out.go:177] * Restarting existing docker container for "multinode-20220906150606-22187" ...
	I0906 15:13:48.029853   29027 cli_runner.go:164] Run: docker start multinode-20220906150606-22187
	I0906 15:13:48.361102   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:13:48.425826   29027 kic.go:415] container "multinode-20220906150606-22187" state is running.
	I0906 15:13:48.426466   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:13:48.491495   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:13:48.491891   29027 machine.go:88] provisioning docker machine ...
	I0906 15:13:48.491915   29027 ubuntu.go:169] provisioning hostname "multinode-20220906150606-22187"
	I0906 15:13:48.491973   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:48.558160   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:48.558370   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:48.558383   29027 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220906150606-22187 && echo "multinode-20220906150606-22187" | sudo tee /etc/hostname
	I0906 15:13:48.680446   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220906150606-22187
	
	I0906 15:13:48.680539   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:48.743904   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:48.744058   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:48.744072   29027 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220906150606-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220906150606-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220906150606-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:13:48.853817   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:13:48.853835   29027 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:13:48.853855   29027 ubuntu.go:177] setting up certificates
	I0906 15:13:48.853865   29027 provision.go:83] configureAuth start
	I0906 15:13:48.853930   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:13:48.919529   29027 provision.go:138] copyHostCerts
	I0906 15:13:48.919578   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:13:48.919647   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:13:48.919659   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:13:48.919763   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:13:48.919932   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:13:48.919965   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:13:48.919969   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:13:48.920055   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:13:48.920752   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:13:48.920870   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:13:48.920877   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:13:48.920974   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:13:48.921152   29027 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.multinode-20220906150606-22187 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220906150606-22187]
	I0906 15:13:49.220973   29027 provision.go:172] copyRemoteCerts
	I0906 15:13:49.221038   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:13:49.221086   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:49.285942   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:49.367849   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 15:13:49.367933   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:13:49.386455   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 15:13:49.386527   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0906 15:13:49.403267   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 15:13:49.403334   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:13:49.419766   29027 provision.go:86] duration metric: configureAuth took 565.884308ms
	I0906 15:13:49.419779   29027 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:13:49.419962   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:13:49.420018   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:49.483049   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:49.483249   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:49.483260   29027 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:13:49.595353   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:13:49.595366   29027 ubuntu.go:71] root file system type: overlay
	I0906 15:13:49.595502   29027 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:13:49.595571   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:49.658210   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:49.658397   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:49.658444   29027 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:13:49.783435   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:13:49.783514   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:49.845990   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:49.846143   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:49.846157   29027 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:13:49.965444   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:13:49.965463   29027 machine.go:91] provisioned docker machine in 1.473558658s
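
The "diff ... || { mv ...; systemctl ... }" command above makes the unit update idempotent: the new docker.service is only swapped into place, and the daemon only reloaded and restarted, when diff -u reports a difference, so re-provisioning an unchanged machine costs nothing. A stripped-down rendering of the same compare-then-replace pattern in Go, sketched against local paths rather than minikube's ssh_runner:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// replaceIfChanged swaps newPath into place only when its contents differ
// from oldPath, returning true when a restart was triggered. Sketch only.
func replaceIfChanged(oldPath, newPath string) (bool, error) {
	oldData, err := os.ReadFile(oldPath)
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	if bytes.Equal(oldData, newData) {
		return false, os.Remove(newPath) // nothing to do, discard the .new file
	}
	if err := os.Rename(newPath, oldPath); err != nil {
		return false, err
	}
	// Mirror the log: reload unit files, then force-restart the service.
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		return true, err
	}
	return true, exec.Command("systemctl", "-f", "restart", "docker").Run()
}

func main() {
	if changed, err := replaceIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
	); err == nil && changed {
		os.Stdout.WriteString("docker.service updated and restarted\n")
	}
}
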
	I0906 15:13:49.965472   29027 start.go:300] post-start starting for "multinode-20220906150606-22187" (driver="docker")
	I0906 15:13:49.965478   29027 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:13:49.965540   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:13:49.965593   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:50.028931   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:50.110988   29027 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:13:50.114281   29027 command_runner.go:130] > NAME="Ubuntu"
	I0906 15:13:50.114291   29027 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0906 15:13:50.114295   29027 command_runner.go:130] > ID=ubuntu
	I0906 15:13:50.114301   29027 command_runner.go:130] > ID_LIKE=debian
	I0906 15:13:50.114307   29027 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0906 15:13:50.114310   29027 command_runner.go:130] > VERSION_ID="20.04"
	I0906 15:13:50.114319   29027 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 15:13:50.114323   29027 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 15:13:50.114329   29027 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 15:13:50.114339   29027 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 15:13:50.114351   29027 command_runner.go:130] > VERSION_CODENAME=focal
	I0906 15:13:50.114361   29027 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0906 15:13:50.114441   29027 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:13:50.114454   29027 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:13:50.114482   29027 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:13:50.114493   29027 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:13:50.114502   29027 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:13:50.114610   29027 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:13:50.114757   29027 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:13:50.114763   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /etc/ssl/certs/221872.pem
	I0906 15:13:50.114906   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:13:50.121433   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:13:50.137804   29027 start.go:303] post-start completed in 172.319981ms
	I0906 15:13:50.137874   29027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:13:50.137923   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:50.201243   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:50.282164   29027 command_runner.go:130] > 11%
	I0906 15:13:50.282237   29027 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:13:50.286159   29027 command_runner.go:130] > 50G
	I0906 15:13:50.286417   29027 fix.go:57] fixHost completed within 2.391743544s
	I0906 15:13:50.286429   29027 start.go:83] releasing machines lock for "multinode-20220906150606-22187", held for 2.39177537s
	I0906 15:13:50.286515   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:13:50.349570   29027 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:13:50.349576   29027 ssh_runner.go:195] Run: systemctl --version
	I0906 15:13:50.349684   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:50.349707   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:50.416609   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:50.416976   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:50.547839   29027 command_runner.go:130] > <a href="https://github.com/kubernetes/k8s.io/wiki/New-Registry-url-for-Kubernetes-(registry.k8s.io)">Temporary Redirect</a>.
	I0906 15:13:50.547876   29027 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0906 15:13:50.547901   29027 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0906 15:13:50.548029   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:13:50.554938   29027 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0906 15:13:50.566868   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:13:50.629432   29027 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
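
Note that "scp memory --> ..." in the lines above appears to be ssh_runner's shorthand for streaming an in-memory asset (here a generated 10-cni.conf) to the remote path, with no local source file involved. Dropping a file into /etc/systemd/system/cri-docker.service.d/ and running daemon-reload is the standard systemd drop-in mechanism for extending a packaged unit without editing cri-docker.service itself.
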
	I0906 15:13:50.710880   29027 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:13:50.720066   29027 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0906 15:13:50.720359   29027 command_runner.go:130] > [Unit]
	I0906 15:13:50.720371   29027 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 15:13:50.720378   29027 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 15:13:50.720391   29027 command_runner.go:130] > BindsTo=containerd.service
	I0906 15:13:50.720403   29027 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0906 15:13:50.720412   29027 command_runner.go:130] > Wants=network-online.target
	I0906 15:13:50.720425   29027 command_runner.go:130] > Requires=docker.socket
	I0906 15:13:50.720429   29027 command_runner.go:130] > StartLimitBurst=3
	I0906 15:13:50.720433   29027 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 15:13:50.720436   29027 command_runner.go:130] > [Service]
	I0906 15:13:50.720439   29027 command_runner.go:130] > Type=notify
	I0906 15:13:50.720442   29027 command_runner.go:130] > Restart=on-failure
	I0906 15:13:50.720448   29027 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 15:13:50.720456   29027 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 15:13:50.720462   29027 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 15:13:50.720468   29027 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 15:13:50.720473   29027 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 15:13:50.720479   29027 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 15:13:50.720485   29027 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 15:13:50.720492   29027 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 15:13:50.720500   29027 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 15:13:50.720510   29027 command_runner.go:130] > ExecStart=
	I0906 15:13:50.720522   29027 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0906 15:13:50.720527   29027 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 15:13:50.720533   29027 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 15:13:50.720538   29027 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 15:13:50.720542   29027 command_runner.go:130] > LimitNOFILE=infinity
	I0906 15:13:50.720545   29027 command_runner.go:130] > LimitNPROC=infinity
	I0906 15:13:50.720550   29027 command_runner.go:130] > LimitCORE=infinity
	I0906 15:13:50.720555   29027 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 15:13:50.720559   29027 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 15:13:50.720562   29027 command_runner.go:130] > TasksMax=infinity
	I0906 15:13:50.720567   29027 command_runner.go:130] > TimeoutStartSec=0
	I0906 15:13:50.720572   29027 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 15:13:50.720575   29027 command_runner.go:130] > Delegate=yes
	I0906 15:13:50.720580   29027 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 15:13:50.720584   29027 command_runner.go:130] > KillMode=process
	I0906 15:13:50.720590   29027 command_runner.go:130] > [Install]
	I0906 15:13:50.720594   29027 command_runner.go:130] > WantedBy=multi-user.target
	I0906 15:13:50.720923   29027 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:13:50.720976   29027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:13:50.730262   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:13:50.742550   29027 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:13:50.742561   29027 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
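
The /etc/crictl.yaml written above is how crictl (and similar CRI tooling) learns which runtime socket to talk to; pointing both runtime-endpoint and image-endpoint at /var/run/cri-dockerd.sock routes CRI calls through cri-dockerd to the Docker daemon instead of containerd, which is consistent with the later "sudo crictl version" output reporting RuntimeName: docker.
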
	I0906 15:13:50.743677   29027 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:13:50.809868   29027 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:13:50.875192   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:13:50.937284   29027 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:13:51.189454   29027 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:13:51.260433   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:13:51.323939   29027 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:13:51.333104   29027 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:13:51.333168   29027 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:13:51.336859   29027 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 15:13:51.336870   29027 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 15:13:51.336877   29027 command_runner.go:130] > Device: 96h/150d	Inode: 115         Links: 1
	I0906 15:13:51.336885   29027 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0906 15:13:51.336891   29027 command_runner.go:130] > Access: 2022-09-06 22:13:50.646119795 +0000
	I0906 15:13:51.336896   29027 command_runner.go:130] > Modify: 2022-09-06 22:13:50.646119795 +0000
	I0906 15:13:51.336903   29027 command_runner.go:130] > Change: 2022-09-06 22:13:50.647119795 +0000
	I0906 15:13:51.336907   29027 command_runner.go:130] >  Birth: -
	I0906 15:13:51.337020   29027 start.go:471] Will wait 60s for crictl version
	I0906 15:13:51.337077   29027 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:13:51.364190   29027 command_runner.go:130] > Version:  0.1.0
	I0906 15:13:51.364337   29027 command_runner.go:130] > RuntimeName:  docker
	I0906 15:13:51.364344   29027 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0906 15:13:51.364472   29027 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0906 15:13:51.366993   29027 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:13:51.367064   29027 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:13:51.397823   29027 command_runner.go:130] > 20.10.17
	I0906 15:13:51.400874   29027 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:13:51.433433   29027 command_runner.go:130] > 20.10.17
	I0906 15:13:51.480937   29027 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:13:51.481158   29027 cli_runner.go:164] Run: docker exec -t multinode-20220906150606-22187 dig +short host.docker.internal
	I0906 15:13:51.598181   29027 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:13:51.598330   29027 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:13:51.602397   29027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
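
The one-liner above pins host.minikube.internal in /etc/hosts by filtering out any stale entry, appending the fresh mapping, staging the result in /tmp/h.$$, and copying it back; it uses cp rather than mv, likely because a container's /etc/hosts is a bind mount that cannot be replaced by rename. The same filter-append-rewrite pattern in plain Go (paths and hostname taken from the log; error handling trimmed for brevity):

package main

import (
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps ip to name,
// mirroring the grep -v / echo / cp pipeline from the log. Sketch only.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing mapping for name (tab-separated, as in the log).
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// Write back in place; the shell version stages via /tmp/h.$$ and cp
	// because an in-container /etc/hosts bind mount can't be renamed over.
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = pinHost("/etc/hosts", "192.168.65.2", "host.minikube.internal")
}
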
	I0906 15:13:51.611602   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:51.674806   29027 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:13:51.674878   29027 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:13:51.701157   29027 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.0
	I0906 15:13:51.701169   29027 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.0
	I0906 15:13:51.701174   29027 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.0
	I0906 15:13:51.701181   29027 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.0
	I0906 15:13:51.701193   29027 command_runner.go:130] > kindest/kindnetd:v20220726-ed811e41
	I0906 15:13:51.701200   29027 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0906 15:13:51.701203   29027 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0906 15:13:51.701211   29027 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0906 15:13:51.701217   29027 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0906 15:13:51.701221   29027 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:13:51.701225   29027 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 15:13:51.704032   29027 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	kindest/kindnetd:v20220726-ed811e41
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 15:13:51.704051   29027 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:13:51.704128   29027 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:13:51.730200   29027 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.0
	I0906 15:13:51.730211   29027 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.0
	I0906 15:13:51.730215   29027 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.0
	I0906 15:13:51.730223   29027 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.0
	I0906 15:13:51.730228   29027 command_runner.go:130] > kindest/kindnetd:v20220726-ed811e41
	I0906 15:13:51.730232   29027 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0906 15:13:51.730236   29027 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0906 15:13:51.730240   29027 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0906 15:13:51.730243   29027 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0906 15:13:51.730248   29027 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:13:51.730253   29027 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 15:13:51.733586   29027 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	kindest/kindnetd:v20220726-ed811e41
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 15:13:51.733607   29027 cache_images.go:84] Images are preloaded, skipping loading
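
The two near-identical "docker images" listings above are membership checks, not redundancy: the first (docker.go) decides whether the preload tarball still needs extracting, the second (cache_images.go) whether any individually cached images must be loaded, and both pass because every expected image:tag for v1.25.0 is already present. A minimal sketch of that set-difference check; the expected set below is illustrative, minikube derives it from the Kubernetes version:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Illustrative subset of the images the log expects for v1.25.0.
	want := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.25.0": true,
		"registry.k8s.io/etcd:3.5.4-0":           true,
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		delete(want, sc.Text()) // every image found is no longer missing
	}
	if len(want) == 0 {
		fmt.Println("images are preloaded, skipping loading")
	} else {
		fmt.Println("missing:", want)
	}
}
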
	I0906 15:13:51.733693   29027 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:13:51.803278   29027 command_runner.go:130] > systemd
	I0906 15:13:51.806890   29027 cni.go:95] Creating CNI manager for ""
	I0906 15:13:51.806902   29027 cni.go:156] 2 nodes found, recommending kindnet
	I0906 15:13:51.806921   29027 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:13:51.806934   29027 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220906150606-22187 NodeName:multinode-20220906150606-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:13:51.807040   29027 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220906150606-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
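
The generated file above stacks four kubeadm API objects in one YAML stream, separated by "---": InitConfiguration (node-local bootstrap: advertise address, CRI socket, kubelet extra args, empty taints), ClusterConfiguration (cluster-wide: API server cert SANs and admission plugins, etcd data dir, pod and service subnets, the control-plane endpoint), KubeletConfiguration (systemd cgroup driver, with image GC and eviction thresholds relaxed so small CI disks don't trigger evictions), and KubeProxyConfiguration (cluster CIDR, metrics bind address, and conntrack timeouts zeroed so kubeadm skips those sysctls).
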
	I0906 15:13:51.807114   29027 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220906150606-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:13:51.807170   29027 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:13:51.813790   29027 command_runner.go:130] > kubeadm
	I0906 15:13:51.813800   29027 command_runner.go:130] > kubectl
	I0906 15:13:51.813803   29027 command_runner.go:130] > kubelet
	I0906 15:13:51.814432   29027 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:13:51.814527   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:13:51.821382   29027 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (492 bytes)
	I0906 15:13:51.833209   29027 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:13:51.845195   29027 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0906 15:13:51.857186   29027 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:13:51.860715   29027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:13:51.869878   29027 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187 for IP: 192.168.58.2
	I0906 15:13:51.869982   29027 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:13:51.870031   29027 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:13:51.870126   29027 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key
	I0906 15:13:51.870187   29027 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key.cee25041
	I0906 15:13:51.870237   29027 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key
	I0906 15:13:51.870244   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 15:13:51.870287   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 15:13:51.870336   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 15:13:51.870363   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 15:13:51.870383   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 15:13:51.870399   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 15:13:51.870415   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 15:13:51.870429   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 15:13:51.870545   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:13:51.870582   29027 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:13:51.870592   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:13:51.870625   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:13:51.870657   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:13:51.870684   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:13:51.870752   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:13:51.870784   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:51.870805   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem -> /usr/share/ca-certificates/22187.pem
	I0906 15:13:51.870821   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /usr/share/ca-certificates/221872.pem
	I0906 15:13:51.871321   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:13:51.887724   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:13:51.904266   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:13:51.920215   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:13:51.936427   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:13:51.952340   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:13:51.968656   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:13:51.985074   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:13:52.000954   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:13:52.017880   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:13:52.034083   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:13:52.050447   29027 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:13:52.062561   29027 ssh_runner.go:195] Run: openssl version
	I0906 15:13:52.067337   29027 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0906 15:13:52.067693   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:13:52.075788   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:52.079463   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:52.079672   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:52.079715   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:52.084419   29027 command_runner.go:130] > b5213941
	I0906 15:13:52.084606   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:13:52.091387   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:13:52.098819   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:13:52.123160   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:13:52.123342   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:13:52.123386   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:13:52.128156   29027 command_runner.go:130] > 51391683
	I0906 15:13:52.128441   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:13:52.135351   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:13:52.142955   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:13:52.146637   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:13:52.146751   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:13:52.146791   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:13:52.151309   29027 command_runner.go:130] > 3ec20f2e
	I0906 15:13:52.151687   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
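
The three test-and-link commands above build OpenSSL's hashed-lookup directory by hand: "openssl x509 -hash -noout" prints the subject-name hash (b5213941, 51391683, 3ec20f2e in the log), and a "<hash>.0" symlink under /etc/ssl/certs is what lets TLS verification find each CA by subject at handshake time. The same step in Go, shelling out to openssl for the hash; a sketch that assumes openssl on PATH and root privileges:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// rehash links certPath into /etc/ssl/certs under its subject hash,
// mirroring the openssl x509 -hash + ln -fs pair from the log.
func rehash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		os.Stderr.WriteString(err.Error() + "\n")
	}
}
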
	I0906 15:13:52.158793   29027 kubeadm.go:396] StartCluster: {Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:13:52.158911   29027 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:13:52.187291   29027 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:13:52.194383   29027 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0906 15:13:52.194397   29027 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0906 15:13:52.194408   29027 command_runner.go:130] > /var/lib/minikube/etcd:
	I0906 15:13:52.194415   29027 command_runner.go:130] > member
	I0906 15:13:52.194939   29027 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:13:52.194955   29027 kubeadm.go:627] restartCluster start
	I0906 15:13:52.194998   29027 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:13:52.201707   29027 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:52.201762   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:52.264897   29027 kubeconfig.go:116] verify returned: extract IP: "multinode-20220906150606-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:13:52.264986   29027 kubeconfig.go:127] "multinode-20220906150606-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:13:52.265325   29027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:13:52.265818   29027 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:13:52.266007   29027 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57276", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:13:52.266314   29027 cert_rotation.go:137] Starting client certificate rotation controller
	I0906 15:13:52.266483   29027 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:13:52.273980   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:52.274042   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:52.281997   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:52.482091   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:52.482164   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:52.491685   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:52.683241   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:52.683315   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:52.692824   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:52.883598   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:52.883667   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:52.893419   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.084051   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.084172   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.093622   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.282125   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.282229   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.292313   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.482484   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.482647   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.492680   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.682947   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.683091   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.692474   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.882091   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.882186   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.892124   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.084054   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.084154   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.093444   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.284139   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.284242   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.294417   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.483389   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.483495   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.493549   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.684140   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.684275   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.694746   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.884124   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.884258   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.894796   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.084123   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:55.084272   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:55.094900   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.284217   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:55.284354   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:55.294412   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.294422   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:55.294464   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:55.302400   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
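
The loop above shows minikube repeatedly probing for a running kube-apiserver process (pgrep over SSH at roughly 200ms intervals) until a deadline passes. A minimal local sketch of that polling pattern, not minikube's implementation (plain os/exec instead of minikube's ssh_runner; interval and timeout here are illustrative assumptions):

// Sketch only: poll for a process matching a pattern until it
// appears or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess runs `pgrep -xnf pattern` at each interval; pgrep
// exits 0 and prints the newest matching PID when one exists.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if out, err := exec.Command("pgrep", "-xnf", pattern).Output(); err == nil {
			fmt.Printf("found pid: %s", out)
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %q", pattern)
}

func main() {
	// Interval/timeout are illustrative values, not minikube's.
	if err := waitForProcess("kube-apiserver.*minikube.*", 200*time.Millisecond, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
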
	I0906 15:13:55.302411   29027 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:13:55.302420   29027 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:13:55.302480   29027 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:13:55.333614   29027 command_runner.go:130] > 167b4a4f3306
	I0906 15:13:55.333630   29027 command_runner.go:130] > 06ab6cf627e8
	I0906 15:13:55.333633   29027 command_runner.go:130] > 803ede092469
	I0906 15:13:55.333636   29027 command_runner.go:130] > e266c748731b
	I0906 15:13:55.333640   29027 command_runner.go:130] > c1eee0e53b49
	I0906 15:13:55.333653   29027 command_runner.go:130] > af277a5518c6
	I0906 15:13:55.333656   29027 command_runner.go:130] > 11d34d183821
	I0906 15:13:55.333660   29027 command_runner.go:130] > 4f1337150041
	I0906 15:13:55.333664   29027 command_runner.go:130] > 7596442e53b5
	I0906 15:13:55.333670   29027 command_runner.go:130] > 4c8a1f372186
	I0906 15:13:55.333673   29027 command_runner.go:130] > 3c8f51d8691c
	I0906 15:13:55.333678   29027 command_runner.go:130] > ef78db90e1cf
	I0906 15:13:55.333681   29027 command_runner.go:130] > 62ca7e8901de
	I0906 15:13:55.333685   29027 command_runner.go:130] > 9456ca1d4c44
	I0906 15:13:55.333688   29027 command_runner.go:130] > 8cecea8208ec
	I0906 15:13:55.333691   29027 command_runner.go:130] > c20d3976c12a
	I0906 15:13:55.333696   29027 command_runner.go:130] > 22c8f9d46178
	I0906 15:13:55.333700   29027 command_runner.go:130] > df0852bc7a51
	I0906 15:13:55.333704   29027 command_runner.go:130] > a34f733a43c2
	I0906 15:13:55.333708   29027 command_runner.go:130] > 3c2093315054
	I0906 15:13:55.333714   29027 command_runner.go:130] > fdc326cd3c6a
	I0906 15:13:55.333717   29027 command_runner.go:130] > 4e3670b1600d
	I0906 15:13:55.333721   29027 command_runner.go:130] > 6bd8b364f108
	I0906 15:13:55.333724   29027 command_runner.go:130] > 6d68f544bf54
	I0906 15:13:55.333728   29027 command_runner.go:130] > a165f2074320
	I0906 15:13:55.333732   29027 command_runner.go:130] > 28bc9837a510
	I0906 15:13:55.333741   29027 command_runner.go:130] > 33a1b253bd37
	I0906 15:13:55.333745   29027 command_runner.go:130] > 0c0974b47f92
	I0906 15:13:55.333748   29027 command_runner.go:130] > c27dff0f48e6
	I0906 15:13:55.333752   29027 command_runner.go:130] > 77d6030ab01b
	I0906 15:13:55.333755   29027 command_runner.go:130] > defb450e84c2
	I0906 15:13:55.336896   29027 docker.go:443] Stopping containers: [167b4a4f3306 06ab6cf627e8 803ede092469 e266c748731b c1eee0e53b49 af277a5518c6 11d34d183821 4f1337150041 7596442e53b5 4c8a1f372186 3c8f51d8691c ef78db90e1cf 62ca7e8901de 9456ca1d4c44 8cecea8208ec c20d3976c12a 22c8f9d46178 df0852bc7a51 a34f733a43c2 3c2093315054 fdc326cd3c6a 4e3670b1600d 6bd8b364f108 6d68f544bf54 a165f2074320 28bc9837a510 33a1b253bd37 0c0974b47f92 c27dff0f48e6 77d6030ab01b defb450e84c2]
	I0906 15:13:55.336981   29027 ssh_runner.go:195] Run: docker stop 167b4a4f3306 06ab6cf627e8 803ede092469 e266c748731b c1eee0e53b49 af277a5518c6 11d34d183821 4f1337150041 7596442e53b5 4c8a1f372186 3c8f51d8691c ef78db90e1cf 62ca7e8901de 9456ca1d4c44 8cecea8208ec c20d3976c12a 22c8f9d46178 df0852bc7a51 a34f733a43c2 3c2093315054 fdc326cd3c6a 4e3670b1600d 6bd8b364f108 6d68f544bf54 a165f2074320 28bc9837a510 33a1b253bd37 0c0974b47f92 c27dff0f48e6 77d6030ab01b defb450e84c2
	I0906 15:13:55.364165   29027 command_runner.go:130] > 167b4a4f3306
	I0906 15:13:55.364450   29027 command_runner.go:130] > 06ab6cf627e8
	I0906 15:13:55.364458   29027 command_runner.go:130] > 803ede092469
	I0906 15:13:55.364461   29027 command_runner.go:130] > e266c748731b
	I0906 15:13:55.364465   29027 command_runner.go:130] > c1eee0e53b49
	I0906 15:13:55.364468   29027 command_runner.go:130] > af277a5518c6
	I0906 15:13:55.364471   29027 command_runner.go:130] > 11d34d183821
	I0906 15:13:55.364475   29027 command_runner.go:130] > 4f1337150041
	I0906 15:13:55.364479   29027 command_runner.go:130] > 7596442e53b5
	I0906 15:13:55.364482   29027 command_runner.go:130] > 4c8a1f372186
	I0906 15:13:55.364486   29027 command_runner.go:130] > 3c8f51d8691c
	I0906 15:13:55.364492   29027 command_runner.go:130] > ef78db90e1cf
	I0906 15:13:55.364495   29027 command_runner.go:130] > 62ca7e8901de
	I0906 15:13:55.364504   29027 command_runner.go:130] > 9456ca1d4c44
	I0906 15:13:55.364510   29027 command_runner.go:130] > 8cecea8208ec
	I0906 15:13:55.364515   29027 command_runner.go:130] > c20d3976c12a
	I0906 15:13:55.364519   29027 command_runner.go:130] > 22c8f9d46178
	I0906 15:13:55.364522   29027 command_runner.go:130] > df0852bc7a51
	I0906 15:13:55.364525   29027 command_runner.go:130] > a34f733a43c2
	I0906 15:13:55.364531   29027 command_runner.go:130] > 3c2093315054
	I0906 15:13:55.364537   29027 command_runner.go:130] > fdc326cd3c6a
	I0906 15:13:55.364540   29027 command_runner.go:130] > 4e3670b1600d
	I0906 15:13:55.364544   29027 command_runner.go:130] > 6bd8b364f108
	I0906 15:13:55.364547   29027 command_runner.go:130] > 6d68f544bf54
	I0906 15:13:55.364978   29027 command_runner.go:130] > a165f2074320
	I0906 15:13:55.364986   29027 command_runner.go:130] > 28bc9837a510
	I0906 15:13:55.364990   29027 command_runner.go:130] > 33a1b253bd37
	I0906 15:13:55.364995   29027 command_runner.go:130] > 0c0974b47f92
	I0906 15:13:55.364999   29027 command_runner.go:130] > c27dff0f48e6
	I0906 15:13:55.365004   29027 command_runner.go:130] > 77d6030ab01b
	I0906 15:13:55.365007   29027 command_runner.go:130] > defb450e84c2
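
Here minikube lists every container whose name matches the kube-system pod naming convention and stops them with a single docker stop; docker echoes each ID as it stops, which is the second block of IDs above. A rough equivalent sketch, assuming a local Docker daemon rather than minikube's SSH-driven runner:

// Sketch only: stop all containers whose names follow the
// k8s_<container>_<pod>_(kube-system)_ convention.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List matching container IDs, mirroring `docker ps -a --filter ... --format {{.ID}}`.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// Stop them all in one invocation, as the log does.
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		fmt.Println("docker stop failed:", err)
	}
}
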
	I0906 15:13:55.368221   29027 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:13:55.378263   29027 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:13:55.384917   29027 command_runner.go:130] > -rw------- 1 root root 5639 Sep  6 22:06 /etc/kubernetes/admin.conf
	I0906 15:13:55.384927   29027 command_runner.go:130] > -rw------- 1 root root 5656 Sep  6 22:10 /etc/kubernetes/controller-manager.conf
	I0906 15:13:55.384936   29027 command_runner.go:130] > -rw------- 1 root root 2059 Sep  6 22:06 /etc/kubernetes/kubelet.conf
	I0906 15:13:55.384959   29027 command_runner.go:130] > -rw------- 1 root root 5604 Sep  6 22:10 /etc/kubernetes/scheduler.conf
	I0906 15:13:55.385523   29027 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep  6 22:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Sep  6 22:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:10 /etc/kubernetes/scheduler.conf
	
	I0906 15:13:55.385577   29027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:13:55.392403   29027 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0906 15:13:55.393074   29027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:13:55.399259   29027 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0906 15:13:55.399913   29027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:13:55.406715   29027 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.406761   29027 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:13:55.413337   29027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:13:55.420558   29027 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.420599   29027 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
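
The grep/rm sequence above keeps only the kubeconfig files that already point at the expected control-plane endpoint; files that fail the grep are removed so the subsequent kubeadm phases regenerate them. A small sketch of that cleanup (not minikube's code; endpoint and file list taken from the log):

// Sketch only: delete kubeconfig files that do not reference the
// expected control-plane endpoint, so kubeadm regenerates them.
package main

import (
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // already absent; nothing to clean up
		}
		// Equivalent of the failed grep above: no endpoint means a stale file.
		if !strings.Contains(string(data), endpoint) {
			os.Remove(f)
		}
	}
}
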
	I0906 15:13:55.427219   29027 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:13:55.434385   29027 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:13:55.434398   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:55.474051   29027 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:13:55.474063   29027 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0906 15:13:55.474306   29027 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0906 15:13:55.474317   29027 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:13:55.474530   29027 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0906 15:13:55.474538   29027 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:13:55.474888   29027 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0906 15:13:55.474903   29027 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0906 15:13:55.475057   29027 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:13:55.475465   29027 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:13:55.475573   29027 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:13:55.475580   29027 command_runner.go:130] > [certs] Using the existing "sa" key
	I0906 15:13:55.478725   29027 command_runner.go:130] ! W0906 22:13:55.482272    1138 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:55.478756   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:55.519887   29027 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:13:56.065961   29027 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0906 15:13:56.233494   29027 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0906 15:13:56.423102   29027 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:13:56.548408   29027 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:13:56.552135   29027 command_runner.go:130] ! W0906 22:13:55.528641    1147 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:56.552153   29027 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.073381262s)
	I0906 15:13:56.552165   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:56.600132   29027 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:13:56.600874   29027 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:13:56.601026   29027 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0906 15:13:56.673420   29027 command_runner.go:130] ! W0906 22:13:56.599919    1169 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:56.673439   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:56.716258   29027 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:13:56.716276   29027 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:13:56.718143   29027 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:13:56.719057   29027 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:13:56.725026   29027 command_runner.go:130] ! W0906 22:13:56.725055    1203 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:56.725048   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:56.778974   29027 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:13:56.789205   29027 command_runner.go:130] ! W0906 22:13:56.786358    1217 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
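
The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) rebuild the control plane piece by piece from the same kubeadm.yaml; each phase is idempotent, so existing certs and configs are reused. A condensed sketch of driving those phases in order, assuming local execution of the same commands the log shows:

// Sketch only: run the kubeadm init phases from the log, in order,
// against the shared config file.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase %s --config %s`, phase, cfg)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}
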
	I0906 15:13:56.789244   29027 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:13:56.789298   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:13:57.342880   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:13:57.843525   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:13:57.852435   29027 command_runner.go:130] > 1605
	I0906 15:13:57.853425   29027 api_server.go:71] duration metric: took 1.064190026s to wait for apiserver process to appear ...
	I0906 15:13:57.853437   29027 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:13:57.853448   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:01.705131   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:14:01.705147   29027 api_server.go:102] status: https://127.0.0.1:57276/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:14:02.205231   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:02.211661   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:14:02.211675   29027 api_server.go:102] status: https://127.0.0.1:57276/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:14:02.705671   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:02.711757   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:14:02.711779   29027 api_server.go:102] status: https://127.0.0.1:57276/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:14:03.206152   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:03.214466   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 200:
	ok
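
The healthz polling above tolerates the early 403 (anonymous access is denied before the RBAC bootstrap roles exist) and the 500s (post-start hooks such as rbac/bootstrap-roles still pending) and stops at the first 200. A minimal sketch of such a poll; the port is per-run, and skipping TLS verification here is purely for brevity (minikube trusts the cluster CA instead):

// Sketch only: poll /healthz until it returns 200, tolerating the
// 403/500 responses emitted while post-start hooks finish.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// InsecureSkipVerify for brevity only; use the cluster CA in practice.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://127.0.0.1:57276/healthz") // port is per-run
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // illustrative backoff
	}
	fmt.Println("timed out waiting for healthz")
}
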
	I0906 15:14:03.214521   29027 round_trippers.go:463] GET https://127.0.0.1:57276/version
	I0906 15:14:03.214526   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:03.214534   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:03.214540   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:03.221095   29027 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 15:14:03.221107   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:03.221114   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:03.221122   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:03.221127   29027 round_trippers.go:580]     Content-Length: 261
	I0906 15:14:03.221132   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:03 GMT
	I0906 15:14:03.221138   29027 round_trippers.go:580]     Audit-Id: 394126c1-447e-4f2c-b3b9-ac7650fc2135
	I0906 15:14:03.221144   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:03.221149   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:03.221170   29027 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.0",
	  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
	  "gitTreeState": "clean",
	  "buildDate": "2022-08-23T17:38:15Z",
	  "goVersion": "go1.19",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 15:14:03.221218   29027 api_server.go:140] control plane version: v1.25.0
	I0906 15:14:03.221226   29027 api_server.go:130] duration metric: took 5.367765183s to wait for apiserver health ...
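
The /version response above is a small JSON object; the control-plane version reported in the log is its gitVersion field. A sketch of decoding it (struct and abbreviated body are illustrative):

// Sketch only: the /version payload decodes into a handful of fields.
package main

import (
	"encoding/json"
	"fmt"
)

type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	// Abbreviated copy of the response body from the log above.
	body := []byte(`{"major":"1","minor":"25","gitVersion":"v1.25.0","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.25.0
}
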
	I0906 15:14:03.221237   29027 cni.go:95] Creating CNI manager for ""
	I0906 15:14:03.221243   29027 cni.go:156] 2 nodes found, recommending kindnet
	I0906 15:14:03.242928   29027 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 15:14:03.279764   29027 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 15:14:03.285064   29027 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0906 15:14:03.285078   29027 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0906 15:14:03.285083   29027 command_runner.go:130] > Device: 8eh/142d	Inode: 267134      Links: 1
	I0906 15:14:03.285088   29027 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 15:14:03.285103   29027 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0906 15:14:03.285110   29027 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0906 15:14:03.285116   29027 command_runner.go:130] > Change: 2022-09-06 21:44:51.197359839 +0000
	I0906 15:14:03.285123   29027 command_runner.go:130] >  Birth: -
	I0906 15:14:03.285202   29027 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.0/kubectl ...
	I0906 15:14:03.285211   29027 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0906 15:14:03.297958   29027 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 15:14:03.802620   29027 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0906 15:14:03.804391   29027 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0906 15:14:03.806424   29027 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0906 15:14:03.841320   29027 command_runner.go:130] > daemonset.apps/kindnet configured
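
Applying the CNI manifest amounts to running the bundled kubectl against the generated cni.yaml, as the four "unchanged/configured" lines confirm. A sketch of that invocation, assuming the same paths the log shows and a local shell instead of minikube's SSH runner:

// Sketch only: apply the generated CNI manifest with the cluster's
// bundled kubectl, as minikube does for kindnet above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.25.0/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
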
	I0906 15:14:03.848638   29027 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:14:03.848700   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:03.848705   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:03.848711   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:03.848718   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:03.854350   29027 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 15:14:03.854372   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:03.854379   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:03.854385   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:03.854390   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:03 GMT
	I0906 15:14:03.854398   29027 round_trippers.go:580]     Audit-Id: d43264b1-eef2-4164-ae3a-dec4b356f994
	I0906 15:14:03.854407   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:03.854424   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:03.855456   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1056"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 86013 chars]
	I0906 15:14:03.858422   29027 system_pods.go:59] 12 kube-system pods found
	I0906 15:14:03.858438   29027 system_pods.go:61] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:14:03.858446   29027 system_pods.go:61] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:14:03.858453   29027 system_pods.go:61] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:14:03.858456   29027 system_pods.go:61] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:14:03.858460   29027 system_pods.go:61] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:14:03.858464   29027 system_pods.go:61] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:14:03.858468   29027 system_pods.go:61] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:14:03.858473   29027 system_pods.go:61] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:14:03.858482   29027 system_pods.go:61] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:14:03.858486   29027 system_pods.go:61] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:14:03.858490   29027 system_pods.go:61] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:14:03.858494   29027 system_pods.go:61] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running
	I0906 15:14:03.858497   29027 system_pods.go:74] duration metric: took 9.849597ms to wait for pod list to return data ...
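
The pod summary above comes from a single GET of the kube-system pod list. An equivalent sketch using client-go (an assumption; minikube issues these requests through its own round tripper and per-profile kubeconfig):

// Sketch only: list kube-system pods and print each phase, the same
// information summarized in the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}
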
	I0906 15:14:03.858503   29027 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:14:03.858540   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes
	I0906 15:14:03.858544   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:03.858549   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:03.858555   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:03.861168   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:03.861178   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:03.861183   29027 round_trippers.go:580]     Audit-Id: e2293256-7675-4afa-a553-5718bf29a84f
	I0906 15:14:03.861188   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:03.861196   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:03.861202   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:03.861206   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:03.861211   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:03 GMT
	I0906 15:14:03.861384   29027 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1056"},"items":[{"metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet"," [truncated 10244 chars]
	I0906 15:14:03.861868   29027 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:14:03.861880   29027 node_conditions.go:123] node cpu capacity is 6
	I0906 15:14:03.861890   29027 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:14:03.861895   29027 node_conditions.go:123] node cpu capacity is 6
	I0906 15:14:03.861900   29027 node_conditions.go:105] duration metric: took 3.393101ms to run NodePressure ...
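
The NodePressure check reads each node's capacity out of the NodeList response; the two pairs of capacity lines above correspond to the ephemeral-storage and cpu quantities on each of the two nodes. A sketch of extracting those values from a Node object, using client-go types as an assumption:

// Sketch only: read the capacity quantities the NodePressure check
// logs for each node.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A stand-in Node carrying the capacities reported in the log.
	node := corev1.Node{Status: corev1.NodeStatus{Capacity: corev1.ResourceList{
		corev1.ResourceCPU:              resource.MustParse("6"),
		corev1.ResourceEphemeralStorage: resource.MustParse("61202244Ki"),
	}}}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}
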
	I0906 15:14:03.861911   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:14:04.044137   29027 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0906 15:14:04.162178   29027 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0906 15:14:04.167024   29027 command_runner.go:130] ! W0906 22:14:03.936156    2008 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:14:04.167051   29027 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:14:04.167118   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0906 15:14:04.167124   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.167130   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.167137   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.170493   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:04.170511   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.170519   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.170527   29027 round_trippers.go:580]     Audit-Id: 1b6ca87a-be5e-49eb-bdd3-214ed5730a44
	I0906 15:14:04.170535   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.170542   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.170548   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.170558   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.171594   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1059"},"items":[{"metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.ad [truncated 30814 chars]
	I0906 15:14:04.172338   29027 kubeadm.go:778] kubelet initialised
	I0906 15:14:04.172348   29027 kubeadm.go:779] duration metric: took 5.289193ms waiting for restarted kubelet to initialise ...
	I0906 15:14:04.172357   29027 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:14:04.172395   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:04.172400   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.172406   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.172411   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.176893   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:04.176909   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.176917   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.176923   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.176930   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.176936   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.176943   29027 round_trippers.go:580]     Audit-Id: 15c57f50-e47e-453d-a3bb-c43aaea62e45
	I0906 15:14:04.176950   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.178636   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1059"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 86013 chars]
	I0906 15:14:04.180813   29027 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:04.180880   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:04.180885   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.180903   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.180912   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.183296   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:04.183310   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.183317   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.183324   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.183332   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.183339   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.183348   29027 round_trippers.go:580]     Audit-Id: 81dc3ec3-9a23-48d7-a9ff-df452ef2b16e
	I0906 15:14:04.183355   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.183440   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6564 chars]
	I0906 15:14:04.183798   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:04.183805   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.183811   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.183817   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.185944   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:04.185958   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.185965   29027 round_trippers.go:580]     Audit-Id: df875065-edc3-4abf-889a-b6d91ad53a97
	I0906 15:14:04.185972   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.185980   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.185986   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.185991   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.185995   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.186060   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:04.186278   29027 pod_ready.go:92] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:04.186285   29027 pod_ready.go:81] duration metric: took 5.458524ms waiting for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
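
A pod counts as "Ready" here when its PodReady status condition is True, which is what each of these pod GETs is evaluating before the wait moves on to the next component. A sketch of that predicate over client-go types (an assumption; minikube evaluates the same condition from its raw responses):

// Sketch only: the readiness predicate behind the repeated pod GETs.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(pod)) // true
}
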
	I0906 15:14:04.186293   29027 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:04.186324   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:04.186329   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.186336   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.186344   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.188301   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:04.188312   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.188317   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.188322   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.188326   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.188332   29027 round_trippers.go:580]     Audit-Id: 25e0406c-e426-4733-baa2-1347852a18cb
	I0906 15:14:04.188337   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.188342   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.188396   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:04.188640   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:04.188647   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.188653   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.188657   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.232318   29027 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0906 15:14:04.232390   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.232416   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.232430   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.232441   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.232456   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.232465   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.232474   29027 round_trippers.go:580]     Audit-Id: a7a1d783-f406-4965-a8f0-c5aae1590591
	I0906 15:14:04.232654   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:04.733499   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:04.733511   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.733517   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.733522   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.736552   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:04.736567   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.736576   29027 round_trippers.go:580]     Audit-Id: 8550cc60-71fb-4b60-9ba7-d34340cfd598
	I0906 15:14:04.736583   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.736590   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.736596   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.736602   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.736609   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.736692   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:04.736978   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:04.736991   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.736999   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.737005   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.739229   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:04.739241   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.739250   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.739258   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.739265   29027 round_trippers.go:580]     Audit-Id: 1ced4632-2232-4cf5-9462-bfbe835be8dc
	I0906 15:14:04.739272   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.739281   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.739288   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.739372   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:05.233285   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:05.233296   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:05.233303   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:05.233309   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:05.236135   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:05.236145   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:05.236151   29027 round_trippers.go:580]     Audit-Id: 2322dcc8-2e64-4601-a36f-f5dae3aeae17
	I0906 15:14:05.236155   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:05.236160   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:05.236165   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:05.236170   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:05.236174   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:05 GMT
	I0906 15:14:05.236232   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:05.236477   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:05.236484   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:05.236491   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:05.236496   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:05.238399   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:05.238408   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:05.238414   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:05.238418   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:05.238423   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:05.238428   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:05 GMT
	I0906 15:14:05.238432   29027 round_trippers.go:580]     Audit-Id: 23d04807-d8a3-414f-82d2-f903bfa0bc63
	I0906 15:14:05.238438   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:05.238640   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:05.735359   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:05.735403   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:05.735416   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:05.735427   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:05.738956   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:05.738969   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:05.738977   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:05.738983   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:05.738990   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:05 GMT
	I0906 15:14:05.738997   29027 round_trippers.go:580]     Audit-Id: ec98a302-1fd4-47c0-b273-b3fe6a6603c4
	I0906 15:14:05.739003   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:05.739009   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:05.739092   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:05.739420   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:05.739432   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:05.739442   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:05.739450   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:05.741350   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:05.741358   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:05.741365   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:05.741370   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:05.741375   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:05.741380   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:05 GMT
	I0906 15:14:05.741385   29027 round_trippers.go:580]     Audit-Id: 047d62d7-1bfd-453f-aae6-542b222d44ac
	I0906 15:14:05.741390   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:05.741430   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:06.234079   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:06.234096   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:06.234105   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:06.234112   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:06.237352   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:06.237364   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:06.237370   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:06.237374   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:06.237379   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:06 GMT
	I0906 15:14:06.237383   29027 round_trippers.go:580]     Audit-Id: 1c1d76ff-0b78-4dbb-b574-59fbdb4e5e3b
	I0906 15:14:06.237388   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:06.237393   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:06.237457   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:06.237715   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:06.237722   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:06.237728   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:06.237733   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:06.239825   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:06.239836   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:06.239841   29027 round_trippers.go:580]     Audit-Id: 7e207513-1851-4d43-9610-46fdc37e6ecb
	I0906 15:14:06.239846   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:06.239851   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:06.239856   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:06.239861   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:06.239865   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:06 GMT
	I0906 15:14:06.239920   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:06.240109   29027 pod_ready.go:102] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:06.735306   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:06.735330   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:06.735341   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:06.735351   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:06.739873   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:06.739888   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:06.739896   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:06.739903   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:06.739911   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:06.739916   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:06.739921   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:06 GMT
	I0906 15:14:06.739925   29027 round_trippers.go:580]     Audit-Id: f18def18-bffc-46ca-b1e6-22ceb747eabb
	I0906 15:14:06.740008   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:06.740321   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:06.740328   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:06.740337   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:06.740344   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:06.742514   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:06.742525   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:06.742530   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:06.742538   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:06.742544   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:06.742549   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:06 GMT
	I0906 15:14:06.742554   29027 round_trippers.go:580]     Audit-Id: 53ede248-e25f-4928-b951-2e5e9ff24c7b
	I0906 15:14:06.742558   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:06.742610   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:07.233387   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:07.233402   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:07.233411   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:07.233418   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:07.236442   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:07.236457   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:07.236463   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:07.236467   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:07 GMT
	I0906 15:14:07.236476   29027 round_trippers.go:580]     Audit-Id: 4bb587db-be41-4aa2-9aa8-dd3faf8713e8
	I0906 15:14:07.236481   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:07.236490   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:07.236495   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:07.236560   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:07.236816   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:07.236823   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:07.236834   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:07.236840   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:07.239087   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:07.239097   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:07.239103   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:07.239107   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:07.239114   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:07.239119   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:07 GMT
	I0906 15:14:07.239124   29027 round_trippers.go:580]     Audit-Id: 81b1a4cb-0d7d-49d4-9256-7bbcd2e975d8
	I0906 15:14:07.239129   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:07.239366   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:07.735230   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:07.735243   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:07.735249   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:07.735254   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:07.737792   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:07.737802   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:07.737807   29027 round_trippers.go:580]     Audit-Id: 5ce1581c-e11e-4cc0-9ac9-307a4af8d3f7
	I0906 15:14:07.737813   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:07.737818   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:07.737822   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:07.737827   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:07.737832   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:07 GMT
	I0906 15:14:07.737881   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:07.738125   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:07.738131   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:07.738136   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:07.738142   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:07.740009   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:07.740019   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:07.740025   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:07.740033   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:07 GMT
	I0906 15:14:07.740038   29027 round_trippers.go:580]     Audit-Id: 1774b55d-df9e-488b-9421-b59b2fa36a34
	I0906 15:14:07.740042   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:07.740047   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:07.740051   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:07.740105   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:08.233411   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:08.233427   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:08.233436   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:08.233443   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:08.236089   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:08.236111   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:08.236120   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:08.236125   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:08.236130   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:08.236135   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:08.236139   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:08 GMT
	I0906 15:14:08.236145   29027 round_trippers.go:580]     Audit-Id: 6ac90674-6cc7-490b-b447-330e260634ea
	I0906 15:14:08.236206   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:08.236457   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:08.236463   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:08.236470   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:08.236477   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:08.238933   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:08.238945   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:08.238953   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:08.238958   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:08 GMT
	I0906 15:14:08.238968   29027 round_trippers.go:580]     Audit-Id: 6ba70212-6e89-4e32-aa7e-c7be65a66466
	I0906 15:14:08.238980   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:08.238993   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:08.239005   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:08.239345   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:08.735500   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:08.735525   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:08.735537   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:08.735547   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:08.738888   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:08.738903   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:08.738909   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:08.738914   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:08.738921   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:08 GMT
	I0906 15:14:08.738925   29027 round_trippers.go:580]     Audit-Id: 37138eb0-454a-4f93-a7b9-bf228c9751a3
	I0906 15:14:08.738930   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:08.738935   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:08.738993   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:08.739246   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:08.739252   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:08.739258   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:08.739263   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:08.741301   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:08.741310   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:08.741315   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:08.741320   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:08.741325   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:08.741329   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:08.741339   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:08 GMT
	I0906 15:14:08.741346   29027 round_trippers.go:580]     Audit-Id: 742ac44d-07d5-4f5c-9e1f-59beb1191a76
	I0906 15:14:08.741542   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:08.741731   29027 pod_ready.go:102] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:09.234765   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:09.234794   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:09.234805   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:09.234813   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:09.237578   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:09.237594   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:09.237601   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:09.237607   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:09 GMT
	I0906 15:14:09.237611   29027 round_trippers.go:580]     Audit-Id: f4e405d2-cb3b-4637-bdd7-09071cd24b53
	I0906 15:14:09.237617   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:09.237625   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:09.237632   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:09.237709   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:09.237993   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:09.238001   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:09.238006   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:09.238011   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:09.239971   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:09.239980   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:09.239985   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:09.239990   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:09.239995   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:09.239999   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:09.240004   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:09 GMT
	I0906 15:14:09.240009   29027 round_trippers.go:580]     Audit-Id: 99276aa5-02b0-498c-a8ea-ab0503bae4dc
	I0906 15:14:09.240049   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:09.735411   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:09.735439   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:09.735473   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:09.735485   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:09.739702   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:09.739723   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:09.739731   29027 round_trippers.go:580]     Audit-Id: 34c15cf7-8a19-479e-8de2-5c861ff87693
	I0906 15:14:09.739737   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:09.739744   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:09.739750   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:09.739757   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:09.739762   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:09 GMT
	I0906 15:14:09.739836   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:09.740172   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:09.740178   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:09.740184   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:09.740189   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:09.742179   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:09.742188   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:09.742196   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:09.742202   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:09.742207   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:09.742211   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:09 GMT
	I0906 15:14:09.742216   29027 round_trippers.go:580]     Audit-Id: 3641b9d9-1779-4e34-a07f-6f5b2ec0a6bc
	I0906 15:14:09.742220   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:09.742270   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:10.234698   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:10.234719   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.234731   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.234741   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.238855   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:10.238871   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.238879   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.238883   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.238888   29027 round_trippers.go:580]     Audit-Id: d336bf35-d215-48b7-8aa4-c670535e90a5
	I0906 15:14:10.238894   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.238899   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.238904   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.238959   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:10.239229   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:10.239235   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.239240   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.239260   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.241325   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:10.241336   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.241342   29027 round_trippers.go:580]     Audit-Id: 7c834241-fe3c-43eb-b3d1-732009b63e5e
	I0906 15:14:10.241347   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.241351   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.241356   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.241361   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.241368   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.241414   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:10.734234   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:10.734245   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.734252   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.734257   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.736488   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:10.736499   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.736504   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.736508   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.736512   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.736516   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.736521   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.736527   29027 round_trippers.go:580]     Audit-Id: 9736e5d0-0792-4ffd-b136-ee4ca4a6eaa5
	I0906 15:14:10.736815   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1107","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6114 chars]
	I0906 15:14:10.737068   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:10.737076   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.737082   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.737089   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.738863   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:10.738872   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.738878   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.738884   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.738889   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.738893   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.738898   29027 round_trippers.go:580]     Audit-Id: 25f472ed-d603-4820-a3b2-dc72441c2c28
	I0906 15:14:10.738906   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.738959   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:10.739150   29027 pod_ready.go:92] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:10.739159   29027 pod_ready.go:81] duration metric: took 6.552837963s waiting for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
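
The ~500ms cadence of the paired GET requests above is minikube's pod_ready.go wait loop: fetch the pod, re-fetch the node it runs on, and log the Ready condition until it turns True (here the etcd pod's resourceVersion advanced from 1031 to 1107 before reporting Ready after 6.55s). As an illustrative aside only, and not minikube's actual implementation, a minimal client-go sketch of this kind of readiness poll could look like the following; the package name podwait and the function WaitPodReady are hypothetical:

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls the named pod on a fixed 500ms interval until its
// Ready condition is True or the timeout expires. This is a sketch of the
// polling pattern visible in the log, not minikube's pod_ready.go.
func WaitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // a more forgiving loop might tolerate transient API errors
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Printf("pod %q in %q namespace has status Ready:%q\n", name, ns, cond.Status)
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // kubelet has not posted a Ready condition yet
	})
}

A fuller waiter, like the one logging here, apparently also re-reads the owning Node between polls (the GET /api/v1/nodes/... requests) before moving on to the next control-plane pod.
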
	I0906 15:14:10.739172   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:10.739198   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:14:10.739201   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.739207   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.739213   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.741052   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:10.741061   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.741066   29027 round_trippers.go:580]     Audit-Id: 583341c9-7cba-4043-a447-96c4817c0ebd
	I0906 15:14:10.741071   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.741076   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.741081   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.741085   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.741090   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.741155   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"1081","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 8714 chars]
	I0906 15:14:10.741406   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:10.741412   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.741418   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.741423   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.743167   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:10.743176   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.743181   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.743186   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.743191   29027 round_trippers.go:580]     Audit-Id: 3adafee3-0e46-456e-ae91-cffe2411127d
	I0906 15:14:10.743196   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.743200   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.743205   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.743247   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:11.244021   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:14:11.244036   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.244044   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.244051   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.247044   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:11.247054   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.247062   29027 round_trippers.go:580]     Audit-Id: 1a1bb9e8-b15c-4922-920f-a72817904e85
	I0906 15:14:11.247067   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.247072   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.247077   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.247081   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.247086   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.247156   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"1081","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 8714 chars]
	I0906 15:14:11.247428   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:11.247433   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.247439   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.247444   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.249305   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:11.249316   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.249321   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.249327   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.249331   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.249336   29027 round_trippers.go:580]     Audit-Id: 40cac01a-055a-4332-baf1-86de31ea6423
	I0906 15:14:11.249341   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.249345   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.249390   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:11.743647   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:14:11.743666   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.743678   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.743687   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.747373   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:11.747384   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.747391   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.747397   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.747401   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.747407   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.747413   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.747418   29027 round_trippers.go:580]     Audit-Id: b32ac438-2b62-411d-8c62-05e2edd91996
	I0906 15:14:11.747578   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"1113","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 8470 chars]
	I0906 15:14:11.747846   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:11.747853   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.747859   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.747864   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.749818   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:11.749825   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.749830   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.749835   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.749840   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.749844   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.749849   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.749853   29027 round_trippers.go:580]     Audit-Id: c362f4c1-37b2-4661-8c58-141ce4a41552
	I0906 15:14:11.749894   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:11.750712   29027 pod_ready.go:92] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:11.750728   29027 pod_ready.go:81] duration metric: took 1.011545386s waiting for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:11.750750   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:11.750812   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:14:11.750819   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.750828   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.750836   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.753375   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:11.753385   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.753391   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.753395   29027 round_trippers.go:580]     Audit-Id: 22bd3574-8a51-4a00-8f8f-3c05bbb3c6cc
	I0906 15:14:11.753403   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.753409   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.753413   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.753420   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.753533   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"1066","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf
ig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config. [truncated 8307 chars]
	I0906 15:14:11.753795   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:11.753801   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.753807   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.753812   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.755762   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:11.755770   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.755775   29027 round_trippers.go:580]     Audit-Id: fdff8381-c2a8-42f0-8e1a-6b64089767ab
	I0906 15:14:11.755780   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.755785   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.755789   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.755795   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.755799   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.755905   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:12.256214   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:14:12.256229   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.256237   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.256246   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.259105   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:12.259119   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.259125   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.259131   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.259136   29027 round_trippers.go:580]     Audit-Id: 5cd8ab23-eafc-49cf-9c5f-bf19977d6843
	I0906 15:14:12.259141   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.259147   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.259153   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.259217   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"1066","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf
ig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config. [truncated 8307 chars]
	I0906 15:14:12.259504   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:12.259511   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.259517   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.259522   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.261371   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.261380   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.261386   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.261391   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.261396   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.261400   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.261405   29027 round_trippers.go:580]     Audit-Id: d19c562b-acc4-47da-adcf-1b6048dc96a6
	I0906 15:14:12.261411   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.261527   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:12.757163   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:14:12.757184   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.757196   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.757207   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.760950   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:12.760965   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.760974   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.760981   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.760987   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.760993   29027 round_trippers.go:580]     Audit-Id: 7a13768f-e1d9-4201-ae0f-d2bd87d77e47
	I0906 15:14:12.761000   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.761006   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.761517   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"1120","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf
ig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config. [truncated 8045 chars]
	I0906 15:14:12.761825   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:12.761832   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.761838   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.761843   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.763837   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.763846   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.763851   29027 round_trippers.go:580]     Audit-Id: e2b67775-3ee7-42f0-8209-a63321f1d2d3
	I0906 15:14:12.763857   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.763862   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.763867   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.763872   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.763877   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.763921   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:12.764099   29027 pod_ready.go:92] pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:12.764109   29027 pod_ready.go:81] duration metric: took 1.013346421s waiting for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.764115   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.764138   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-czbjx
	I0906 15:14:12.764142   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.764148   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.764153   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.765860   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.765868   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.765873   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.765878   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.765882   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.765888   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.765893   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.765898   29027 round_trippers.go:580]     Audit-Id: a41b2170-c46f-4c8b-b6a1-104d2c0f333c
	I0906 15:14:12.765938   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-czbjx","generateName":"kube-proxy-","namespace":"kube-system","uid":"c88daf0a-05d7-45b7-b888-8e0749e4d321","resourceVersion":"887","creationTimestamp":"2022-09-06T22:08:13Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:08:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5997 chars]
	I0906 15:14:12.766163   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m03
	I0906 15:14:12.766169   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.766174   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.766179   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.767526   29027 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0906 15:14:12.767534   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.767540   29027 round_trippers.go:580]     Content-Length: 238
	I0906 15:14:12.767545   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.767554   29027 round_trippers.go:580]     Audit-Id: 5904103e-cc22-4bcf-a4b5-faa187929fd1
	I0906 15:14:12.767558   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.767564   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.767569   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.767574   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.767590   29027 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-20220906150606-22187-m03\" not found","reason":"NotFound","details":{"name":"multinode-20220906150606-22187-m03","kind":"nodes"},"code":404}
	I0906 15:14:12.767687   29027 pod_ready.go:97] node "multinode-20220906150606-22187-m03" hosting pod "kube-proxy-czbjx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-20220906150606-22187-m03": nodes "multinode-20220906150606-22187-m03" not found
	I0906 15:14:12.767694   29027 pod_ready.go:81] duration metric: took 3.574788ms waiting for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	E0906 15:14:12.767700   29027 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-20220906150606-22187-m03" hosting pod "kube-proxy-czbjx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-20220906150606-22187-m03": nodes "multinode-20220906150606-22187-m03" not found
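The 404 just above is handled deliberately rather than treated as a failure: when the node hosting a pod (here multinode-20220906150606-22187-m03) no longer exists, the readiness wait skips that pod and moves on. A hedged sketch of that check using client-go's standard apierrors.IsNotFound helper; the nodeGone name is an assumption for illustration:

package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeGone reports whether the node hosting a pod has been deleted,
// which is what the 404 Status response above indicates for ...-m03.
func nodeGone(ctx context.Context, cs kubernetes.Interface, nodeName string) (bool, error) {
	_, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // node deleted; caller skips the pod instead of failing the wait
	}
	return false, err
}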
	I0906 15:14:12.767705   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.767728   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:14:12.767732   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.767737   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.767742   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.769469   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.769477   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.769482   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.769488   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.769494   29027 round_trippers.go:580]     Audit-Id: fac150e7-ed19-4eed-ae4b-f04a075beafb
	I0906 15:14:12.769498   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.769503   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.769508   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.769548   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kkmpm","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b228e9a-6577-46a3-b848-9c9fca602ba6","resourceVersion":"1084","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5765 chars]
	I0906 15:14:12.769773   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:12.769778   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.769784   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.769789   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.771577   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.771585   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.771590   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.771595   29027 round_trippers.go:580]     Audit-Id: 3ca4ab75-d723-46c6-bb30-aabe54d18d8e
	I0906 15:14:12.771599   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.771604   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.771608   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.771613   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.771646   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:12.771835   29027 pod_ready.go:92] pod "kube-proxy-kkmpm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:12.771841   29027 pod_ready.go:81] duration metric: took 4.131523ms waiting for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.771847   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.771867   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:14:12.771871   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.771877   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.771882   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.773648   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.773657   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.773663   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.773668   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.773672   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.773678   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.773683   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.773689   29027 round_trippers.go:580]     Audit-Id: def20370-b2ce-40f7-ab5c-1bc5de5d3026
	I0906 15:14:12.773733   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wnrrx","generateName":"kube-proxy-","namespace":"kube-system","uid":"260cbcc2-7110-48ce-aa3d-482b3694ae6d","resourceVersion":"897","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5770 chars]
	I0906 15:14:12.773957   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:14:12.773962   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.773968   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.773974   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.775768   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.775777   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.775782   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.775787   29027 round_trippers.go:580]     Audit-Id: 63735ea6-d066-498e-9281-7ca90b93844b
	I0906 15:14:12.775792   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.775797   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.775802   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.775808   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.775841   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m02","uid":"0cd805fb-0749-46b4-a7e3-90583fb06a8a","resourceVersion":"833","creationTimestamp":"2022-09-06T22:10:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m02","kubernetes.io/os":"linux"},"annotations":{"node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:10:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:10:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":
{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-at [truncated 3821 chars]
	I0906 15:14:12.776009   29027 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:12.776016   29027 pod_ready.go:81] duration metric: took 4.165395ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.776021   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.936278   29027 request.go:533] Waited for 160.218151ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:12.936317   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:12.936322   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.936387   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.936400   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.939543   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:12.939561   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.939568   29027 round_trippers.go:580]     Audit-Id: ef7226ba-b5f8-45ac-908b-ab45390aeb15
	I0906 15:14:12.939574   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.939581   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.939587   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.939594   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.939601   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.939690   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1062","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 5171 chars]
	I0906 15:14:13.134425   29027 request.go:533] Waited for 194.450544ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:13.134479   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:13.134486   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:13.134497   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:13.134505   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:13.137645   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:13.137658   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:13.137663   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:13.137668   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:13.137672   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:13.137677   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:13.137681   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:13 GMT
	I0906 15:14:13.137687   29027 round_trippers.go:580]     Audit-Id: 9937fbdd-c523-4f78-9f34-65e8ed352eaa
	I0906 15:14:13.137748   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:13.639233   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:13.639255   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:13.639267   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:13.639279   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:13.643124   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:13.643136   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:13.643142   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:13.643146   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:13.643150   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:13 GMT
	I0906 15:14:13.643156   29027 round_trippers.go:580]     Audit-Id: e8683bdc-3044-42b4-a149-ec62595f451c
	I0906 15:14:13.643160   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:13.643165   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:13.643221   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1062","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 5171 chars]
	I0906 15:14:13.643445   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:13.643451   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:13.643456   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:13.643462   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:13.644934   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:13.644944   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:13.644951   29027 round_trippers.go:580]     Audit-Id: ade35b5e-2ea8-44cc-ab51-d669e0a6e0f9
	I0906 15:14:13.644958   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:13.644963   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:13.644968   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:13.644973   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:13.644977   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:13 GMT
	I0906 15:14:13.645151   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:14.138082   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:14.138092   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:14.138099   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:14.138105   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:14.140473   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:14.140488   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:14.140500   29027 round_trippers.go:580]     Audit-Id: 14c5c68f-939b-43bb-97ad-16ac5c611aa7
	I0906 15:14:14.140508   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:14.140518   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:14.140528   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:14.140534   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:14.140543   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:14 GMT
	I0906 15:14:14.141099   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1062","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 5171 chars]
	I0906 15:14:14.141374   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:14.141388   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:14.141399   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:14.141413   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:14.143499   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:14.143511   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:14.143516   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:14.143523   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:14.143529   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:14 GMT
	I0906 15:14:14.143534   29027 round_trippers.go:580]     Audit-Id: 41f3f7f2-b489-4631-831f-954e40b3fc69
	I0906 15:14:14.143540   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:14.143544   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:14.143600   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:14.640166   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:14.640190   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:14.640203   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:14.640233   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:14.644237   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:14.644253   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:14.644261   29027 round_trippers.go:580]     Audit-Id: 764cc717-dbfd-4daf-9df4-08278120bf15
	I0906 15:14:14.644273   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:14.644281   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:14.644287   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:14.644293   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:14.644302   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:14 GMT
	I0906 15:14:14.644370   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1062","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 5171 chars]
	I0906 15:14:14.644658   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:14.644666   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:14.644674   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:14.644681   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:14.646585   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:14.646594   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:14.646600   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:14.646605   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:14.646610   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:14 GMT
	I0906 15:14:14.646615   29027 round_trippers.go:580]     Audit-Id: 6232b258-1f91-4c99-ad47-9e023cc3fdcb
	I0906 15:14:14.646620   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:14.646624   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:14.646695   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:15.138136   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:15.138159   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.138171   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.138180   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.141416   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:15.141426   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.141431   29027 round_trippers.go:580]     Audit-Id: c91885bb-a781-4c37-a0e7-fedf3ecd7299
	I0906 15:14:15.141437   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.141442   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.141447   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.141452   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.141456   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.141503   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1138","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 4927 chars]
	I0906 15:14:15.141719   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:15.141725   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.141731   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.141736   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.143344   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:15.143353   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.143358   29027 round_trippers.go:580]     Audit-Id: 287d7240-5abf-4b3e-b560-0f0b4edb1602
	I0906 15:14:15.143363   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.143367   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.143372   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.143377   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.143382   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.143723   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:15.143902   29027 pod_ready.go:92] pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:15.143911   29027 pod_ready.go:81] duration metric: took 2.367876578s waiting for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:15.143918   29027 pod_ready.go:38] duration metric: took 10.971514597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
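The pod_ready lines above come from a poll loop: re-fetch each system-critical pod (roughly every 500ms, judging by the request timestamps) and test its Ready condition until the wait times out. A minimal client-go sketch of that pattern; waitPodReady and its parameters are illustrative, not minikube's actual pod_ready.go internals:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod until its Ready condition is True or timeout hits.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }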
	I0906 15:14:15.143932   29027 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:14:15.151523   29027 command_runner.go:130] > -16
	I0906 15:14:15.151552   29027 ops.go:34] apiserver oom_adj: -16
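The two lines above probe the apiserver's OOM priority: the same bash one-liner minikube runs over SSH, parsed into an integer (-16 here biases the kernel away from OOM-killing the apiserver). A local sketch, assuming a kube-apiserver process exists on the host where this runs:

    import (
        "os/exec"
        "strconv"
        "strings"
    )

    // apiserverOOMAdj runs the same probe as the ssh_runner line above.
    func apiserverOOMAdj() (int, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(out)))
    }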
	I0906 15:14:15.151557   29027 kubeadm.go:631] restartCluster took 22.956516772s
	I0906 15:14:15.151563   29027 kubeadm.go:398] StartCluster complete in 22.99269725s
	I0906 15:14:15.151576   29027 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:14:15.151643   29027 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:14:15.152033   29027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:14:15.152665   29027 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:14:15.152829   29027 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57276", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-2022090615060
6-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
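The loader.go and kapi.go lines above build a rest.Config from the kubeconfig that was just rewritten, yielding the client behind all the round_trippers requests that follow. A hedged client-go equivalent (the path argument stands in for the long per-run kubeconfig path in the log):

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // clientFromKubeconfig loads a rest.Config from disk and wraps it in a clientset.
    func clientFromKubeconfig(path string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", path)
        if err != nil {
            return nil, err
        }
        return kubernetes.NewForConfig(cfg)
    }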
	I0906 15:14:15.153018   29027 round_trippers.go:463] GET https://127.0.0.1:57276/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0906 15:14:15.153024   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.153030   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.153035   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.155276   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:15.155286   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.155291   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.155297   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.155302   29027 round_trippers.go:580]     Content-Length: 292
	I0906 15:14:15.155306   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.155315   29027 round_trippers.go:580]     Audit-Id: 566c0fe5-6793-469e-868e-2b5a58149f9a
	I0906 15:14:15.155320   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.155325   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.155338   29027 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a49f3069-8a92-4785-ab5f-7ea0a1721073","resourceVersion":"1132","creationTimestamp":"2022-09-06T22:06:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0906 15:14:15.155416   29027 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220906150606-22187" rescaled to 1
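The GET on .../deployments/coredns/scale and the "rescaled to 1" line are a read of the Scale subresource followed by a conditional write; here spec.replicas was already 1, so no PUT to the deployment was needed. Sketched with client-go's subresource helpers (function name and signature are illustrative):

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS pins the coredns deployment to a single replica.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas != 1 {
            scale.Spec.Replicas = 1
            _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        }
        return err
    }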
	I0906 15:14:15.155447   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:14:15.155446   29027 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:14:15.155475   29027 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0906 15:14:15.178528   29027 out.go:177] * Verifying Kubernetes components...
	I0906 15:14:15.155604   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:14:15.178561   29027 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220906150606-22187"
	I0906 15:14:15.178562   29027 addons.go:65] Setting default-storageclass=true in profile "multinode-20220906150606-22187"
	I0906 15:14:15.211248   29027 command_runner.go:130] > apiVersion: v1
	I0906 15:14:15.220319   29027 command_runner.go:130] > data:
	I0906 15:14:15.220331   29027 command_runner.go:130] >   Corefile: |
	I0906 15:14:15.220339   29027 command_runner.go:130] >     .:53 {
	I0906 15:14:15.220349   29027 command_runner.go:130] >         errors
	I0906 15:14:15.220355   29027 command_runner.go:130] >         health {
	I0906 15:14:15.220362   29027 command_runner.go:130] >            lameduck 5s
	I0906 15:14:15.220367   29027 command_runner.go:130] >         }
	I0906 15:14:15.220371   29027 command_runner.go:130] >         ready
	I0906 15:14:15.220380   29027 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0906 15:14:15.220385   29027 command_runner.go:130] >            pods insecure
	I0906 15:14:15.220393   29027 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0906 15:14:15.220398   29027 command_runner.go:130] >            ttl 30
	I0906 15:14:15.220402   29027 command_runner.go:130] >         }
	I0906 15:14:15.220406   29027 command_runner.go:130] >         prometheus :9153
	I0906 15:14:15.220411   29027 command_runner.go:130] >         hosts {
	I0906 15:14:15.220417   29027 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0906 15:14:15.220423   29027 command_runner.go:130] >            fallthrough
	I0906 15:14:15.220429   29027 command_runner.go:130] >         }
	I0906 15:14:15.220435   29027 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0906 15:14:15.220441   29027 command_runner.go:130] >            max_concurrent 1000
	I0906 15:14:15.220447   29027 command_runner.go:130] >         }
	I0906 15:14:15.220452   29027 command_runner.go:130] >         cache 30
	I0906 15:14:15.220456   29027 command_runner.go:130] >         loop
	I0906 15:14:15.220462   29027 command_runner.go:130] >         reload
	I0906 15:14:15.220469   29027 command_runner.go:130] >         loadbalance
	I0906 15:14:15.220474   29027 command_runner.go:130] >     }
	I0906 15:14:15.220479   29027 command_runner.go:130] > kind: ConfigMap
	I0906 15:14:15.220483   29027 command_runner.go:130] > metadata:
	I0906 15:14:15.220487   29027 command_runner.go:130] >   creationTimestamp: "2022-09-06T22:06:35Z"
	I0906 15:14:15.220490   29027 command_runner.go:130] >   name: coredns
	I0906 15:14:15.220494   29027 command_runner.go:130] >   namespace: kube-system
	I0906 15:14:15.220498   29027 command_runner.go:130] >   resourceVersion: "371"
	I0906 15:14:15.220512   29027 command_runner.go:130] >   uid: 99586de8-1370-4877-aa2d-6bd1c7354337
	I0906 15:14:15.220332   29027 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220906150606-22187"
	I0906 15:14:15.220339   29027 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220906150606-22187"
	W0906 15:14:15.220347   29027 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:14:15.220362   29027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:14:15.220387   29027 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:14:15.220569   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:14:15.220571   29027 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
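The "skipping..." decision above comes from reading the coredns ConfigMap (minikube shells out to the pinned kubectl for this, as the ssh_runner line shows) and checking the Corefile for the host record. A sketch of the same check using client-go instead of kubectl, purely for illustration:

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasHostRecord reports whether the Corefile already maps host.minikube.internal.
    func hasHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }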
	I0906 15:14:15.220682   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:14:15.230847   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:14:15.294084   29027 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:14:15.314614   29027 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:14:15.314904   29027 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57276", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-2022090615060
6-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:14:15.335586   29027 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:14:15.335607   29027 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:14:15.335717   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:14:15.335882   29027 round_trippers.go:463] GET https://127.0.0.1:57276/apis/storage.k8s.io/v1/storageclasses
	I0906 15:14:15.335896   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.335909   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.335922   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.339595   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:15.339613   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.339619   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.339623   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.339628   29027 round_trippers.go:580]     Content-Length: 1274
	I0906 15:14:15.339633   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.339637   29027 round_trippers.go:580]     Audit-Id: 9a3b5a9e-1744-4d3f-b2fa-be3c3759ced1
	I0906 15:14:15.339641   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.339645   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.339686   29027 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1138"},"items":[{"metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I0906 15:14:15.340070   29027 request.go:1073] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0906 15:14:15.340105   29027 round_trippers.go:463] PUT https://127.0.0.1:57276/apis/storage.k8s.io/v1/storageclasses/standard
	I0906 15:14:15.340109   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.340115   29027 round_trippers.go:473]     Content-Type: application/json
	I0906 15:14:15.340120   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.340125   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.341941   29027 node_ready.go:35] waiting up to 6m0s for node "multinode-20220906150606-22187" to be "Ready" ...
	I0906 15:14:15.342010   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:15.342015   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.342021   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.342032   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.344133   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:15.344142   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:15.344147   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.344159   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.344159   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.344167   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.344173   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.344175   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.344179   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.344186   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.344201   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.344201   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.344207   29027 round_trippers.go:580]     Content-Length: 1220
	I0906 15:14:15.344211   29027 round_trippers.go:580]     Audit-Id: df18436a-8f86-44a9-8a96-b0631ed12e71
	I0906 15:14:15.344213   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.344220   29027 round_trippers.go:580]     Audit-Id: ea4686e1-24c8-4463-90bf-1c9e9df78d3c
	I0906 15:14:15.344226   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.344258   29027 request.go:1073] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0906 15:14:15.344287   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:15.344366   29027 addons.go:153] Setting addon default-storageclass=true in "multinode-20220906150606-22187"
	W0906 15:14:15.344376   29027 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:14:15.344397   29027 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:14:15.344510   29027 node_ready.go:49] node "multinode-20220906150606-22187" has status "Ready":"True"
	I0906 15:14:15.344518   29027 node_ready.go:38] duration metric: took 2.562ms waiting for node "multinode-20220906150606-22187" to be "Ready" ...
	I0906 15:14:15.344528   29027 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:14:15.344566   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:15.344574   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.344582   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.344590   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.344796   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:14:15.348943   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:15.348971   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.348980   29027 round_trippers.go:580]     Audit-Id: 1ccba07a-4566-487a-afcc-c75fc472142a
	I0906 15:14:15.348988   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.348996   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.349005   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.349011   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.349023   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.349830   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1138"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86023 chars]
	I0906 15:14:15.352004   29027 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:15.352054   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:15.352059   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.352064   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.352070   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.354643   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:15.354664   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.354691   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.354703   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.354710   29027 round_trippers.go:580]     Audit-Id: 38afbccf-a57a-4cac-8196-924f7e1539ca
	I0906 15:14:15.354716   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.354723   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.354728   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.354803   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:15.403881   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:14:15.410760   29027 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:14:15.410771   29027 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:14:15.410831   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:14:15.474843   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:14:15.493757   29027 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:14:15.534855   29027 request.go:533] Waited for 179.694612ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:15.534893   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:15.534898   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.534905   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.534911   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.537280   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:15.537293   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.537299   29027 round_trippers.go:580]     Audit-Id: 34af4cc5-3ceb-4e39-ba92-8adb75e37b52
	I0906 15:14:15.537307   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.537312   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.537317   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.537322   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.537326   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.537388   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:15.562417   29027 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:14:15.661103   29027 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0906 15:14:15.663029   29027 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0906 15:14:15.665062   29027 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0906 15:14:15.667212   29027 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0906 15:14:15.669201   29027 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0906 15:14:15.674660   29027 command_runner.go:130] > pod/storage-provisioner configured
	I0906 15:14:15.730046   29027 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0906 15:14:15.778498   29027 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 15:14:15.799376   29027 addons.go:414] enableAddons completed in 643.899729ms
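Both addons above follow the same two-step pattern visible in the ssh_runner lines: scp the manifest from memory into /etc/kubernetes/addons inside the node, then run the version-pinned kubectl against the in-VM kubeconfig (the "unchanged"/"configured" lines are that command's output). Roughly, the second step as a plain exec call; 'env' makes the variable assignment explicit since exec has no shell to interpret the log's KUBECONFIG=... prefix:

    import "os/exec"

    // applyAddon applies one addon manifest with the node's pinned kubectl.
    func applyAddon(manifest string) ([]byte, error) {
        return exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.25.0/kubectl", "apply", "-f", manifest).CombinedOutput()
    }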
	I0906 15:14:16.038054   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:16.038072   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:16.038081   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:16.038087   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:16.041162   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:16.041179   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:16.041185   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:16.041191   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:16.041196   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:16.041201   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:16.041206   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:16 GMT
	I0906 15:14:16.041210   29027 round_trippers.go:580]     Audit-Id: 87b7b163-5168-4afe-89fe-fb71533a4074
	I0906 15:14:16.041278   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:16.041595   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:16.041605   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:16.041611   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:16.041615   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:16.043538   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:16.043547   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:16.043553   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:16 GMT
	I0906 15:14:16.043558   29027 round_trippers.go:580]     Audit-Id: b767f0fe-33b6-428c-97bb-feb252fa3bf0
	I0906 15:14:16.043563   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:16.043567   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:16.043572   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:16.043582   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:16.043630   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:16.538109   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:16.538123   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:16.538132   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:16.538139   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:16.541177   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:16.541187   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:16.541192   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:16.541197   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:16.541202   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:16.541206   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:16.541211   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:16 GMT
	I0906 15:14:16.541216   29027 round_trippers.go:580]     Audit-Id: 799725a2-520c-49a0-8eb0-32c857f93046
	I0906 15:14:16.541280   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:16.541572   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:16.541578   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:16.541585   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:16.541592   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:16.543400   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:16.543410   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:16.543415   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:16 GMT
	I0906 15:14:16.543420   29027 round_trippers.go:580]     Audit-Id: 07cb6d2e-4b64-46dc-ae91-4a0dd994d3d4
	I0906 15:14:16.543428   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:16.543434   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:16.543438   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:16.543443   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:16.543618   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:17.038578   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:17.038603   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:17.038615   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:17.038626   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:17.041741   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:17.041754   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:17.041764   29027 round_trippers.go:580]     Audit-Id: a7f36701-16dc-4907-826a-364df98443f6
	I0906 15:14:17.041773   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:17.041790   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:17.041799   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:17.041808   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:17.041820   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:17 GMT
	I0906 15:14:17.041995   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:17.042288   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:17.042294   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:17.042300   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:17.042305   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:17.044055   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:17.044072   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:17.044090   29027 round_trippers.go:580]     Audit-Id: 9fe55fe8-abaa-4b0a-bd70-02ec59a03f2f
	I0906 15:14:17.044103   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:17.044113   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:17.044122   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:17.044130   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:17.044137   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:17 GMT
	I0906 15:14:17.044183   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:17.537934   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:17.537951   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:17.537963   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:17.537974   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:17.541074   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:17.541087   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:17.541093   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:17.541097   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:17.541102   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:17 GMT
	I0906 15:14:17.541106   29027 round_trippers.go:580]     Audit-Id: 3e3506e9-5f28-4ab8-b88c-c33c4834bd3b
	I0906 15:14:17.541114   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:17.541119   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:17.541183   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:17.541483   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:17.541489   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:17.541494   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:17.541499   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:17.543268   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:17.543278   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:17.543284   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:17.543291   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:17.543297   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:17 GMT
	I0906 15:14:17.543307   29027 round_trippers.go:580]     Audit-Id: 19364591-28a4-4447-abc3-fd3e6269d908
	I0906 15:14:17.543312   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:17.543317   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:17.543594   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:17.543777   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
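The block above is one iteration of minikube's readiness wait: pod_ready polls the CoreDNS pod roughly every 500 ms, pairing each pod GET with a GET of the node it is scheduled on, and logs "Ready":"False" until the pod's Ready condition flips to True. A minimal client-go sketch of the same pattern, assuming a kubeconfig at the default location (the pod name, namespace, and interval are copied from the log; everything else is illustrative, not minikube's actual code):

// readiness_poll.go - a sketch of the polling pattern visible in this log:
// GET the pod, check its Ready condition, sleep ~500 ms, repeat.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check behind the "Ready":"False" lines above: scan
// the pod's status conditions for PodReady == True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.Background(), "coredns-565d847f94-t6l66", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet; retrying")
		time.Sleep(500 * time.Millisecond)
	}
}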
	I0906 15:14:18.037768   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:18.037786   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:18.037794   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:18.037801   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:18.040812   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:18.040826   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:18.040831   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:18.040852   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:18.040860   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:18 GMT
	I0906 15:14:18.040865   29027 round_trippers.go:580]     Audit-Id: 9287653c-0505-4e9b-ac66-e890953d6357
	I0906 15:14:18.040869   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:18.040877   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:18.041041   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:18.041369   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:18.041377   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:18.041383   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:18.041387   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:18.043316   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:18.043328   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:18.043334   29027 round_trippers.go:580]     Audit-Id: 3ebbc5ed-527e-4e36-a259-75b8cbda2f75
	I0906 15:14:18.043338   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:18.043342   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:18.043364   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:18.043373   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:18.043380   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:18 GMT
	I0906 15:14:18.043424   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:18.537858   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:18.537875   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:18.537884   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:18.537891   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:18.540967   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:18.540980   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:18.540986   29027 round_trippers.go:580]     Audit-Id: 580c7abc-d1eb-4d7a-ba5a-5a7bacf8f3dc
	I0906 15:14:18.540991   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:18.540995   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:18.540999   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:18.541024   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:18.541029   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:18 GMT
	I0906 15:14:18.541099   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:18.541391   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:18.541397   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:18.541403   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:18.541409   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:18.543264   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:18.543273   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:18.543278   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:18 GMT
	I0906 15:14:18.543283   29027 round_trippers.go:580]     Audit-Id: 2118579d-4f1b-4484-ab03-e6ee5545445d
	I0906 15:14:18.543288   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:18.543292   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:18.543298   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:18.543303   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:18.543352   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:19.037851   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:19.037875   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:19.037883   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:19.037891   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:19.040667   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:19.040680   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:19.040688   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:19 GMT
	I0906 15:14:19.040699   29027 round_trippers.go:580]     Audit-Id: 7c6ec055-f067-4c33-824d-d9339b29d487
	I0906 15:14:19.040710   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:19.040723   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:19.040732   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:19.040738   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:19.040806   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:19.041171   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:19.041178   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:19.041187   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:19.041194   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:19.043616   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:19.043634   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:19.043641   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:19 GMT
	I0906 15:14:19.043647   29027 round_trippers.go:580]     Audit-Id: 1fb43a50-be41-4d88-8aaa-fc6e71f51b8f
	I0906 15:14:19.043655   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:19.043663   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:19.043672   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:19.043678   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:19.043731   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:19.537849   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:19.537866   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:19.537876   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:19.537884   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:19.540988   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:19.541000   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:19.541006   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:19.541010   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:19.541014   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:19.541019   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:19.541024   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:19 GMT
	I0906 15:14:19.541033   29027 round_trippers.go:580]     Audit-Id: 5945fa61-8216-46ce-85bf-1dbce6dbe601
	I0906 15:14:19.541102   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:19.541404   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:19.541410   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:19.541415   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:19.541421   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:19.543268   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:19.543277   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:19.543283   29027 round_trippers.go:580]     Audit-Id: ba84b224-dabb-4d94-bd00-edbf1e790d1e
	I0906 15:14:19.543289   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:19.543296   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:19.543306   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:19.543311   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:19.543316   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:19 GMT
	I0906 15:14:19.543361   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:20.037761   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:20.037774   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:20.037781   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:20.037786   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:20.044037   29027 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 15:14:20.044050   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:20.044056   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:20.044063   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:20.044069   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:20.044075   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:20 GMT
	I0906 15:14:20.044079   29027 round_trippers.go:580]     Audit-Id: 9612119d-2f38-4aea-ab63-9c71e920c73f
	I0906 15:14:20.044084   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:20.045023   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:20.045347   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:20.045353   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:20.045359   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:20.045364   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:20.047696   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:20.047708   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:20.047715   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:20.047722   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:20 GMT
	I0906 15:14:20.047728   29027 round_trippers.go:580]     Audit-Id: a109a508-b05a-493c-a1d6-d5bcd804b6d3
	I0906 15:14:20.047733   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:20.047741   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:20.047746   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:20.047882   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:20.048084   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
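Every response in these cycles carries the API Priority and Fairness headers X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid, identifying which FlowSchema and PriorityLevelConfiguration the apiserver matched the request to; the round_trippers.go lines are client-go's debugging round tripper printing them at elevated klog verbosity. A sketch that reproduces one of the logged GETs over the kubeconfig-derived transport and prints those two headers (the URL path is copied from the log; the kubeconfig location is an assumption):

// apf_headers.go - issue one raw GET with the kubeconfig's credentials and
// print the API Priority and Fairness response headers seen in this log.
package main

import (
	"fmt"
	"net/http"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// TransportFor wires up the TLS client certs / tokens from the kubeconfig.
	rt, err := rest.TransportFor(cfg)
	if err != nil {
		panic(err)
	}
	httpClient := &http.Client{Transport: rt}
	url := cfg.Host + "/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66"
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/json, */*")
	resp, err := httpClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("FlowSchema UID:   ", resp.Header.Get("X-Kubernetes-Pf-Flowschema-Uid"))
	fmt.Println("PriorityLevel UID:", resp.Header.Get("X-Kubernetes-Pf-Prioritylevel-Uid"))
}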
	I0906 15:14:20.539210   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:20.539228   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:20.539237   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:20.539244   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:20.542262   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:20.542277   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:20.542285   29027 round_trippers.go:580]     Audit-Id: 77e69f45-a96b-42dc-8c82-df3386a476c2
	I0906 15:14:20.542291   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:20.542296   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:20.542301   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:20.542305   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:20.542313   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:20 GMT
	I0906 15:14:20.542383   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:20.542725   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:20.542732   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:20.542738   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:20.542743   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:20.544682   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:20.544691   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:20.544696   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:20 GMT
	I0906 15:14:20.544701   29027 round_trippers.go:580]     Audit-Id: 7da4aa16-d483-406f-a975-66d6dee6f8d1
	I0906 15:14:20.544706   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:20.544710   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:20.544716   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:20.544721   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:20.544772   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:21.039890   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:21.039919   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:21.039933   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:21.039943   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:21.043716   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:21.043728   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:21.043733   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:21.043737   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:21 GMT
	I0906 15:14:21.043742   29027 round_trippers.go:580]     Audit-Id: 8b605e70-e3d9-4aef-a3b0-a376d6fa2069
	I0906 15:14:21.043752   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:21.043756   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:21.043760   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:21.044058   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:21.044356   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:21.044362   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:21.044367   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:21.044373   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:21.046149   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:21.046157   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:21.046161   29027 round_trippers.go:580]     Audit-Id: 995dc6cb-c3bc-4850-b49b-d8d701f507f0
	I0906 15:14:21.046166   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:21.046173   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:21.046178   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:21.046183   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:21.046187   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:21 GMT
	I0906 15:14:21.046234   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:21.538965   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:21.538984   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:21.538993   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:21.539000   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:21.542223   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:21.542237   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:21.542243   29027 round_trippers.go:580]     Audit-Id: 39edbcda-5de0-4720-b996-4908b757a8f2
	I0906 15:14:21.542251   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:21.542255   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:21.542260   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:21.542268   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:21.542273   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:21 GMT
	I0906 15:14:21.542341   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:21.542644   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:21.542651   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:21.542657   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:21.542662   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:21.544922   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:21.544933   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:21.544940   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:21.544945   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:21.544950   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:21.544955   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:21.544960   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:21 GMT
	I0906 15:14:21.544965   29027 round_trippers.go:580]     Audit-Id: a62ea4a9-3f8b-4e61-b73d-3f4bd8cca9e5
	I0906 15:14:21.545160   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:22.039504   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:22.039550   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:22.039573   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:22.039581   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:22.042664   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:22.042676   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:22.042681   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:22.042685   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:22.042690   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:22.042695   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:22 GMT
	I0906 15:14:22.042700   29027 round_trippers.go:580]     Audit-Id: bce130fe-a5c2-4231-a10d-6bf1335b6362
	I0906 15:14:22.042704   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:22.042763   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:22.043055   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:22.043061   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:22.043067   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:22.043072   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:22.045572   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:22.045581   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:22.045588   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:22.045594   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:22.045598   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:22.045603   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:22.045608   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:22 GMT
	I0906 15:14:22.045612   29027 round_trippers.go:580]     Audit-Id: 1263b504-66e0-49f8-9a13-0a8a05dc31b3
	I0906 15:14:22.045656   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:22.537803   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:22.537814   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:22.537820   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:22.537825   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:22.540282   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:22.540292   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:22.540298   29027 round_trippers.go:580]     Audit-Id: 412e21e7-8317-43c3-ac4e-1ed170d65eb5
	I0906 15:14:22.540305   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:22.540316   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:22.540327   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:22.540357   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:22.540369   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:22 GMT
	I0906 15:14:22.540636   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:22.540926   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:22.540932   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:22.540937   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:22.540942   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:22.542685   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:22.542694   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:22.542699   29027 round_trippers.go:580]     Audit-Id: a76985b3-d8ce-4cb3-b9f7-4c47407dc47b
	I0906 15:14:22.542704   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:22.542709   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:22.542713   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:22.542718   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:22.542723   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:22 GMT
	I0906 15:14:22.542765   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:22.542965   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
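The node GET paired with each pod check returns the same Node object every cycle (resourceVersion 1045 throughout), including the minikube.k8s.io/* labels stamped at cluster creation. A sketch of reading that node's Ready condition and labels with client-go (the node name is copied from the log; the kubeconfig location is an assumption):

// node_ready.go - fetch the node seen in this log and report its Ready
// condition plus the minikube-specific labels visible in the logged body.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().
		Get(context.Background(), "multinode-20220906150606-22187", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
	fmt.Println("primary node:", node.Labels["minikube.k8s.io/primary"])
	fmt.Println("minikube version:", node.Labels["minikube.k8s.io/version"])
}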
	I0906 15:14:23.038174   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:23.038198   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:23.038207   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:23.038214   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:23.041500   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:23.041513   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:23.041519   29027 round_trippers.go:580]     Audit-Id: 591fb38b-0f96-4830-9a74-0b47b227645d
	I0906 15:14:23.041523   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:23.041528   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:23.041532   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:23.041537   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:23.041541   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:23 GMT
	I0906 15:14:23.041609   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:23.041924   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:23.041930   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:23.041936   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:23.041941   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:23.044053   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:23.044062   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:23.044068   29027 round_trippers.go:580]     Audit-Id: e25c8357-9972-4742-bf50-867b9524a93d
	I0906 15:14:23.044073   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:23.044077   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:23.044082   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:23.044087   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:23.044092   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:23 GMT
	I0906 15:14:23.044139   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:23.538283   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:23.538303   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:23.538315   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:23.538325   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:23.541350   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:23.541360   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:23.541367   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:23.541373   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:23 GMT
	I0906 15:14:23.541377   29027 round_trippers.go:580]     Audit-Id: 6fe13596-2642-46b9-8f2c-0394450a1f89
	I0906 15:14:23.541382   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:23.541386   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:23.541391   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:23.541451   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:23.541736   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:23.541742   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:23.541748   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:23.541752   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:23.543611   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:23.543621   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:23.543627   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:23.543631   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:23.543636   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:23 GMT
	I0906 15:14:23.543641   29027 round_trippers.go:580]     Audit-Id: 768595bc-4745-4d7f-8207-faa5ca98df79
	I0906 15:14:23.543646   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:23.543651   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:23.543725   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:24.038331   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:24.038351   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:24.038375   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:24.038384   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:24.041568   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:24.041581   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:24.041591   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:24.041598   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:24.041605   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:24 GMT
	I0906 15:14:24.041615   29027 round_trippers.go:580]     Audit-Id: 2a52053e-d3e1-4ac8-9d25-60e7f4f2323e
	I0906 15:14:24.041623   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:24.041628   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:24.041685   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:24.041998   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:24.042006   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:24.042017   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:24.042031   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:24.043801   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:24.043809   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:24.043814   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:24.043819   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:24.043824   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:24.043829   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:24.043833   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:24 GMT
	I0906 15:14:24.043840   29027 round_trippers.go:580]     Audit-Id: 7704f072-c5f6-4b6a-8081-160a4ee8313e
	I0906 15:14:24.043888   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:24.537911   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:24.537928   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:24.537937   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:24.537948   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:24.540939   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:24.540950   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:24.540955   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:24 GMT
	I0906 15:14:24.540960   29027 round_trippers.go:580]     Audit-Id: d329839b-c8ec-4dad-8517-e9aa9323fb02
	I0906 15:14:24.540965   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:24.540969   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:24.540975   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:24.540981   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:24.541051   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:24.541330   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:24.541336   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:24.541343   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:24.541354   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:24.543088   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:24.543096   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:24.543101   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:24.543106   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:24.543111   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:24.543115   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:24.543120   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:24 GMT
	I0906 15:14:24.543124   29027 round_trippers.go:580]     Audit-Id: aed0ecbd-59d0-44c8-835d-baad3c05e210
	I0906 15:14:24.543808   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:24.544232   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:25.037816   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:25.037831   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:25.037837   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:25.037842   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:25.040674   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:25.040684   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:25.040689   29027 round_trippers.go:580]     Audit-Id: 851b12d6-b83c-4c40-b2d7-aab8cc966a29
	I0906 15:14:25.040694   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:25.040698   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:25.040703   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:25.040707   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:25.040712   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:25 GMT
	I0906 15:14:25.040781   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:25.041080   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:25.041087   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:25.041093   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:25.041098   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:25.042909   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:25.042918   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:25.042924   29027 round_trippers.go:580]     Audit-Id: 79a5c82b-4b00-4387-bb8d-e2d369f36fff
	I0906 15:14:25.042930   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:25.042941   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:25.042948   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:25.042953   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:25.042958   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:25 GMT
	I0906 15:14:25.043175   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:25.538087   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:25.538100   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:25.538106   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:25.538111   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:25.540524   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:25.540534   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:25.540540   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:25.540544   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:25.540549   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:25.540553   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:25 GMT
	I0906 15:14:25.540558   29027 round_trippers.go:580]     Audit-Id: 8e68392b-a1f7-4713-8192-7b131bb32e7f
	I0906 15:14:25.540563   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:25.540638   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:25.540947   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:25.540952   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:25.540958   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:25.540963   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:25.542868   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:25.542877   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:25.542883   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:25.542887   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:25.542892   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:25.542897   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:25.542902   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:25 GMT
	I0906 15:14:25.542906   29027 round_trippers.go:580]     Audit-Id: 4f11cc81-344e-4edc-a496-6c1168c4ea2f
	I0906 15:14:25.542992   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:26.038022   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:26.038047   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:26.038058   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:26.038068   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:26.041630   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:26.041643   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:26.041649   29027 round_trippers.go:580]     Audit-Id: 61b535f8-0765-47d2-ad21-24bfc2ffe936
	I0906 15:14:26.041659   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:26.041664   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:26.041669   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:26.041674   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:26.041680   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:26 GMT
	I0906 15:14:26.041755   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:26.042056   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:26.042062   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:26.042070   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:26.042078   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:26.043890   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:26.043900   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:26.043905   29027 round_trippers.go:580]     Audit-Id: ba9e1b11-219e-4879-b1c9-158b55a783fb
	I0906 15:14:26.043910   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:26.043915   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:26.043920   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:26.043924   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:26.043929   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:26 GMT
	I0906 15:14:26.043976   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:26.539863   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:26.539883   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:26.539895   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:26.539905   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:26.542923   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:26.542935   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:26.542940   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:26.542945   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:26 GMT
	I0906 15:14:26.542949   29027 round_trippers.go:580]     Audit-Id: 9430c572-4184-4355-ad38-6c0a27cd5b02
	I0906 15:14:26.542954   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:26.542958   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:26.542962   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:26.543033   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:26.543333   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:26.543339   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:26.543347   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:26.543354   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:26.546660   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:26.546672   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:26.546677   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:26 GMT
	I0906 15:14:26.546682   29027 round_trippers.go:580]     Audit-Id: a96cd79a-5607-422f-83e0-9e89709c8242
	I0906 15:14:26.546686   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:26.546691   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:26.546699   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:26.546705   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:26.547155   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:26.547349   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:27.037849   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:27.037879   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:27.037926   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:27.037940   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:27.041763   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:27.041781   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:27.041788   29027 round_trippers.go:580]     Audit-Id: 515478cb-9474-4102-9159-ddbe923a3452
	I0906 15:14:27.041794   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:27.041803   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:27.041810   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:27.041818   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:27.041826   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:27 GMT
	I0906 15:14:27.041912   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:27.042289   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:27.042296   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:27.042304   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:27.042310   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:27.044256   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:27.044266   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:27.044271   29027 round_trippers.go:580]     Audit-Id: 5f9dda77-c625-4131-bfef-754b506115e0
	I0906 15:14:27.044277   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:27.044281   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:27.044286   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:27.044291   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:27.044296   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:27 GMT
	I0906 15:14:27.044422   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:27.538972   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:27.538997   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:27.539013   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:27.539025   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:27.542375   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:27.542388   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:27.542399   29027 round_trippers.go:580]     Audit-Id: d6ed2255-f7ae-494f-8992-986068b49dd6
	I0906 15:14:27.542405   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:27.542409   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:27.542414   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:27.542419   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:27.542424   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:27 GMT
	I0906 15:14:27.542489   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:27.542780   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:27.542787   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:27.542796   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:27.542809   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:27.545153   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:27.545163   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:27.545168   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:27 GMT
	I0906 15:14:27.545174   29027 round_trippers.go:580]     Audit-Id: 5f42c095-2b94-4d85-bf17-2e09887c6c8e
	I0906 15:14:27.545178   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:27.545183   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:27.545187   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:27.545192   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:27.545240   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:28.037831   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:28.037856   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:28.037890   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:28.037905   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:28.041237   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:28.041246   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:28.041252   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:28 GMT
	I0906 15:14:28.041260   29027 round_trippers.go:580]     Audit-Id: fc626f2b-7bcb-4aad-af18-57d04d7d2dba
	I0906 15:14:28.041265   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:28.041270   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:28.041301   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:28.041306   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:28.041365   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:28.041669   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:28.041676   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:28.041681   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:28.041686   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:28.044357   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:28.044368   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:28.044373   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:28.044379   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:28.044387   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:28.044392   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:28.044397   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:28 GMT
	I0906 15:14:28.044402   29027 round_trippers.go:580]     Audit-Id: 78f93e3c-9e82-4f4c-98e3-b4e0bcbef40b
	I0906 15:14:28.044452   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:28.539927   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:28.539951   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:28.539964   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:28.539975   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:28.543352   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:28.543365   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:28.543370   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:28.543375   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:28.543379   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:28.543384   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:28 GMT
	I0906 15:14:28.543389   29027 round_trippers.go:580]     Audit-Id: 6c4609a8-f36f-45fd-a5b1-586241096d7f
	I0906 15:14:28.543393   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:28.543462   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:28.543759   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:28.543765   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:28.543772   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:28.543777   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:28.545526   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:28.545536   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:28.545541   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:28.545546   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:28 GMT
	I0906 15:14:28.545554   29027 round_trippers.go:580]     Audit-Id: e21e4e6d-bd7e-4da1-947a-69bbab18a276
	I0906 15:14:28.545558   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:28.545563   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:28.545567   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:28.545615   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:29.039636   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:29.039660   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:29.039696   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:29.039734   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:29.043597   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:29.043613   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:29.043626   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:29.043634   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:29.043642   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:29 GMT
	I0906 15:14:29.043655   29027 round_trippers.go:580]     Audit-Id: 93ca9f34-5dff-48cf-af91-d3d03e7f89ef
	I0906 15:14:29.043662   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:29.043670   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:29.043761   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:29.044148   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:29.044154   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:29.044159   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:29.044164   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:29.045912   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:29.045922   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:29.045930   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:29.045937   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:29.045943   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:29.045951   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:29 GMT
	I0906 15:14:29.045957   29027 round_trippers.go:580]     Audit-Id: 59fb3f2a-46ca-4ac4-81d2-848a09e43435
	I0906 15:14:29.045978   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:29.046193   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:29.046386   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:29.539964   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:29.539985   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:29.539998   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:29.540009   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:29.543741   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:29.543757   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:29.543765   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:29.543771   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:29.543777   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:29.543784   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:29 GMT
	I0906 15:14:29.543790   29027 round_trippers.go:580]     Audit-Id: bcafdd23-527d-4cbb-b4ff-d990e5f55a54
	I0906 15:14:29.543797   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:29.543876   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:29.544247   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:29.544261   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:29.544269   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:29.544278   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:29.546155   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:29.546164   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:29.546170   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:29.546174   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:29.546180   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:29 GMT
	I0906 15:14:29.546185   29027 round_trippers.go:580]     Audit-Id: 94e1effe-2fa0-4bd1-b2ac-7acf70a128a1
	I0906 15:14:29.546189   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:29.546194   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:29.546238   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:30.039893   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:30.039915   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:30.039927   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:30.039938   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:30.043031   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:30.043041   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:30.043047   29027 round_trippers.go:580]     Audit-Id: cce90fd0-f2cc-4157-a016-67f619a6fb83
	I0906 15:14:30.043068   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:30.043082   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:30.043088   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:30.043094   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:30.043099   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:30 GMT
	I0906 15:14:30.043205   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:30.043497   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:30.043503   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:30.043509   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:30.043514   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:30.045579   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:30.045590   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:30.045597   29027 round_trippers.go:580]     Audit-Id: b9871ec8-59fa-4161-9d1c-7f8528e230d9
	I0906 15:14:30.045604   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:30.045609   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:30.045613   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:30.045618   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:30.045622   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:30 GMT
	I0906 15:14:30.045679   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:30.539363   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:30.539385   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:30.539407   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:30.539418   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:30.543190   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:30.543205   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:30.543212   29027 round_trippers.go:580]     Audit-Id: 61c3f193-0ca9-474a-a53f-383ef29bb613
	I0906 15:14:30.543219   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:30.543227   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:30.543234   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:30.543239   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:30.543245   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:30 GMT
	I0906 15:14:30.543347   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:30.543681   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:30.543688   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:30.543694   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:30.543700   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:30.545826   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:30.545836   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:30.545842   29027 round_trippers.go:580]     Audit-Id: 9201c637-93f1-4825-9e5f-360f20d666c7
	I0906 15:14:30.545846   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:30.545852   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:30.545857   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:30.545862   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:30.545867   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:30 GMT
	I0906 15:14:30.545913   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:31.038882   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:31.038908   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:31.038945   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:31.038957   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:31.042512   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:31.042527   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:31.042534   29027 round_trippers.go:580]     Audit-Id: 531cc090-20f7-410d-8f74-4d55ac670997
	I0906 15:14:31.042540   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:31.042546   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:31.042552   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:31.042557   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:31.042563   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:31 GMT
	I0906 15:14:31.042646   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:31.043051   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:31.043059   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:31.043069   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:31.043078   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:31.044937   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:31.044947   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:31.044952   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:31.044957   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:31.044962   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:31.044966   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:31 GMT
	I0906 15:14:31.044971   29027 round_trippers.go:580]     Audit-Id: 07e71ef2-6658-4426-8df1-efeb67d89052
	I0906 15:14:31.044975   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:31.045020   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:31.537964   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:31.537980   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:31.537989   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:31.537996   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:31.541432   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:31.541445   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:31.541450   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:31.541455   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:31.541459   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:31.541463   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:31.541468   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:31 GMT
	I0906 15:14:31.541473   29027 round_trippers.go:580]     Audit-Id: c0e9e5ed-9091-4fb8-9cda-6942653c6955
	I0906 15:14:31.541538   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:31.541830   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:31.541837   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:31.541842   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:31.541847   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:31.543838   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:31.543848   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:31.543853   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:31.543861   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:31.543866   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:31.543871   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:31.543876   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:31 GMT
	I0906 15:14:31.543881   29027 round_trippers.go:580]     Audit-Id: 96f5dda0-c486-4bd4-ae60-b2d0873ecf41
	I0906 15:14:31.543926   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:31.544105   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
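[editor's note] The repeating block above is minikube's readiness wait: every ~500ms it GETs the coredns pod and its node, then logs the pod's Ready condition (pod_ready.go:102) until it flips to True or the wait times out. The sketch below is an illustrative reconstruction of that polling pattern using client-go, not minikube's actual pod_ready implementation; the kubeconfig path is an assumption, while the namespace, pod name, and 500ms cadence are taken from the log.

// Illustrative sketch only (assumed kubeconfig path; not minikube's real code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig pointing at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// Poll roughly every 500ms, matching the cadence visible in the log.
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-ticker.C:
			pod, err := client.CoreV1().Pods("kube-system").
				Get(ctx, "coredns-565d847f94-t6l66", metav1.GetOptions{})
			if err != nil {
				continue // transient API error; retry on next tick
			}
			// A pod is Ready when its PodReady condition is True.
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
			fmt.Printf("pod %q Ready=False; retrying\n", pod.Name)
		}
	}
}

In the failing run, the loop above would keep printing Ready=False, exactly as the surrounding log shows. The raw log continues below.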
	I0906 15:14:32.039898   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:32.039921   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:32.039932   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:32.039952   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:32.043672   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:32.043694   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:32.043706   29027 round_trippers.go:580]     Audit-Id: 56e1715f-478d-458a-ace0-c8ce280ce079
	I0906 15:14:32.043716   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:32.043730   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:32.043742   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:32.043748   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:32.043755   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:32 GMT
	I0906 15:14:32.043954   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:32.044344   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:32.044353   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:32.044361   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:32.044369   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:32.046094   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:32.046103   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:32.046108   29027 round_trippers.go:580]     Audit-Id: 8c441bcd-0e21-419d-9c94-34a075cc5693
	I0906 15:14:32.046115   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:32.046121   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:32.046125   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:32.046130   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:32.046135   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:32 GMT
	I0906 15:14:32.046286   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:32.539953   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:32.539974   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:32.539986   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:32.539997   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:32.543962   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:32.543984   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:32.543992   29027 round_trippers.go:580]     Audit-Id: 165ecdbf-3f6c-453f-9aef-28fecb40db00
	I0906 15:14:32.543999   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:32.544006   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:32.544012   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:32.544019   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:32.544026   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:32 GMT
	I0906 15:14:32.544106   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:32.544464   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:32.544470   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:32.544476   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:32.544481   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:32.546569   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:32.546579   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:32.546586   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:32.546591   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:32.546596   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:32.546600   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:32.546605   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:32 GMT
	I0906 15:14:32.546609   29027 round_trippers.go:580]     Audit-Id: 1f744d63-e525-4fac-a473-05ab78dc9ebb
	I0906 15:14:32.546652   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:33.037823   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:33.037847   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:33.037858   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:33.037868   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:33.041330   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:33.041340   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:33.041346   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:33 GMT
	I0906 15:14:33.041351   29027 round_trippers.go:580]     Audit-Id: f27f0a43-4d50-4f38-b610-9d8afaa6dc95
	I0906 15:14:33.041357   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:33.041361   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:33.041366   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:33.041373   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:33.041527   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:33.041819   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:33.041825   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:33.041831   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:33.041836   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:33.043735   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:33.043757   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:33.043768   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:33.043775   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:33.043782   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:33.043786   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:33.043791   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:33 GMT
	I0906 15:14:33.043795   29027 round_trippers.go:580]     Audit-Id: 8d0a829a-ce1a-4890-ac17-599ce13dd5ec
	I0906 15:14:33.043998   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:33.539975   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:33.539996   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:33.540009   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:33.540020   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:33.544191   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:33.544206   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:33.544217   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:33 GMT
	I0906 15:14:33.544224   29027 round_trippers.go:580]     Audit-Id: f7b8b39c-8bb5-4ece-9469-310c608b0dd7
	I0906 15:14:33.544232   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:33.544238   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:33.544244   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:33.544250   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:33.544320   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:33.544676   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:33.544682   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:33.544688   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:33.544693   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:33.546731   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:33.546740   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:33.546745   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:33.546750   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:33 GMT
	I0906 15:14:33.546759   29027 round_trippers.go:580]     Audit-Id: ce54d76a-a089-4b31-89f8-97bf10d4a501
	I0906 15:14:33.546764   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:33.546770   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:33.546775   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:33.546821   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:33.547002   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:34.037906   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:34.037929   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:34.037940   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:34.037975   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:34.041313   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:34.041328   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:34.041337   29027 round_trippers.go:580]     Audit-Id: 64d38398-62c9-4bce-ae7f-bd85c6b65d1b
	I0906 15:14:34.041345   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:34.041356   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:34.041367   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:34.041374   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:34.041380   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:34 GMT
	I0906 15:14:34.041452   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:34.041762   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:34.041768   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:34.041774   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:34.041780   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:34.043774   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:34.043785   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:34.043794   29027 round_trippers.go:580]     Audit-Id: 2288ac3d-0a98-46e0-87f6-9285f21857c4
	I0906 15:14:34.043800   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:34.043806   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:34.043814   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:34.043821   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:34.043827   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:34 GMT
	I0906 15:14:34.043888   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:34.538926   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:34.538950   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:34.538967   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:34.538978   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:34.542704   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:34.542721   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:34.542729   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:34.542735   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:34.542745   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:34 GMT
	I0906 15:14:34.542752   29027 round_trippers.go:580]     Audit-Id: 37e38ea4-93a1-49ae-b3bc-6b9b0253c7ca
	I0906 15:14:34.542762   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:34.542771   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:34.542874   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:34.543261   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:34.543268   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:34.543274   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:34.543279   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:34.545298   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:34.545307   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:34.545312   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:34.545317   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:34.545321   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:34.545325   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:34 GMT
	I0906 15:14:34.545330   29027 round_trippers.go:580]     Audit-Id: 5330250b-f95d-4585-934d-10877175d093
	I0906 15:14:34.545334   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:34.545514   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:35.038284   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:35.038337   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:35.038351   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:35.038362   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:35.041757   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:35.041779   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:35.041797   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:35.041817   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:35 GMT
	I0906 15:14:35.041833   29027 round_trippers.go:580]     Audit-Id: 71a7a496-0c0c-4257-8d1f-ac70a35f0b6f
	I0906 15:14:35.041851   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:35.041863   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:35.041869   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:35.042252   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:35.042629   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:35.042636   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:35.042641   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:35.042647   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:35.044460   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:35.044469   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:35.044475   29027 round_trippers.go:580]     Audit-Id: 75fe9223-54ba-4c47-8655-686d1120cbc3
	I0906 15:14:35.044489   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:35.044493   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:35.044498   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:35.044503   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:35.044508   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:35 GMT
	I0906 15:14:35.044556   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:35.537914   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:35.537934   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:35.537946   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:35.537956   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:35.541911   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:35.541928   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:35.541935   29027 round_trippers.go:580]     Audit-Id: f6fa57a3-5c2c-4efc-b352-52d572e2ad19
	I0906 15:14:35.541941   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:35.541947   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:35.541953   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:35.541962   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:35.541967   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:35 GMT
	I0906 15:14:35.542037   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:35.542415   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:35.542422   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:35.542428   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:35.542432   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:35.544602   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:35.544612   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:35.544617   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:35.544622   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:35 GMT
	I0906 15:14:35.544627   29027 round_trippers.go:580]     Audit-Id: 43ee3f60-e708-4124-8b0d-65c3e347f70d
	I0906 15:14:35.544631   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:35.544637   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:35.544641   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:35.544692   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:36.038225   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:36.038249   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:36.038262   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:36.038272   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:36.042208   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:36.042225   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:36.042234   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:36.042240   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:36.042253   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:36.042261   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:36 GMT
	I0906 15:14:36.042272   29027 round_trippers.go:580]     Audit-Id: 39835689-2678-4348-8e0e-95c64b867026
	I0906 15:14:36.042279   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:36.042503   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:36.042883   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:36.042891   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:36.042899   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:36.042906   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:36.044728   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:36.044737   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:36.044742   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:36 GMT
	I0906 15:14:36.044747   29027 round_trippers.go:580]     Audit-Id: 51647f90-6eee-4d8c-bc94-a2f56030963d
	I0906 15:14:36.044752   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:36.044756   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:36.044761   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:36.044765   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:36.044811   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:36.044993   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
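The GET/`pod_ready` pairs above show minikube polling the coredns pod roughly every 500 ms and reporting "Ready":"False" until the pod's Ready condition flips to True. As a rough sketch of that style of wait loop with client-go (waitPodReady is an illustrative helper under assumed names, not minikube's actual implementation in pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the API server at a fixed interval until the pod's
// Ready condition is True, mirroring the ~500 ms GET cadence in the log.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Assumes a reachable cluster via the default kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitPodReady(cs, "kube-system", "coredns-565d847f94-t6l66", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}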
	I0906 15:14:36.537858   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:36.537875   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:36.537884   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:36.537891   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:36.541222   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:36.541235   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:36.541240   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:36 GMT
	I0906 15:14:36.541244   29027 round_trippers.go:580]     Audit-Id: c3c67978-7f88-4830-9e72-5920158633b7
	I0906 15:14:36.541249   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:36.541253   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:36.541257   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:36.541261   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:36.541320   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:36.541615   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:36.541623   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:36.541628   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:36.541634   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:36.543498   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:36.543508   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:36.543513   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:36.543518   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:36 GMT
	I0906 15:14:36.543523   29027 round_trippers.go:580]     Audit-Id: 7d60adcb-a0b6-4999-a282-858c60316741
	I0906 15:14:36.543527   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:36.543532   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:36.543536   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:36.543583   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:37.037823   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:37.037842   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:37.037851   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:37.037857   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:37.040773   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:37.040784   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:37.040790   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:37.040795   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:37.040800   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:37.040804   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:37.040809   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:37 GMT
	I0906 15:14:37.040814   29027 round_trippers.go:580]     Audit-Id: 1a2f86e5-2441-4fd0-8195-d2019133953c
	I0906 15:14:37.040879   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:37.041169   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:37.041175   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:37.041181   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:37.041186   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:37.042846   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:37.042859   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:37.042864   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:37.042870   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:37.042874   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:37.042879   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:37.042885   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:37 GMT
	I0906 15:14:37.042890   29027 round_trippers.go:580]     Audit-Id: 1fae7b40-f9cd-4ab2-962b-dd4272cf6f2d
	I0906 15:14:37.043113   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:37.538160   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:37.538182   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:37.538195   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:37.538205   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:37.542175   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:37.542188   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:37.542195   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:37 GMT
	I0906 15:14:37.542199   29027 round_trippers.go:580]     Audit-Id: d783c506-c421-44e4-9617-88081656fce3
	I0906 15:14:37.542204   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:37.542210   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:37.542217   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:37.542222   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:37.542284   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:37.542579   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:37.542586   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:37.542591   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:37.542596   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:37.544454   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:37.544467   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:37.544474   29027 round_trippers.go:580]     Audit-Id: ccc74424-d582-447d-a61c-d611efb0fe29
	I0906 15:14:37.544479   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:37.544483   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:37.544487   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:37.544507   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:37.544514   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:37 GMT
	I0906 15:14:37.544715   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:38.037933   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:38.037955   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:38.037968   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:38.037978   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:38.041596   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:38.041612   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:38.041621   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:38.041628   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:38.041634   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:38.041641   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:38 GMT
	I0906 15:14:38.041647   29027 round_trippers.go:580]     Audit-Id: 028fce43-cdf1-41f1-bce7-8c69600f8ca0
	I0906 15:14:38.041662   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:38.042180   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:38.042480   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:38.042486   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:38.042492   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:38.042497   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:38.044188   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:38.044197   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:38.044202   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:38.044207   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:38.044212   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:38.044217   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:38 GMT
	I0906 15:14:38.044221   29027 round_trippers.go:580]     Audit-Id: 5bac9e20-820b-4f53-812d-f9c243cf0ace
	I0906 15:14:38.044226   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:38.044269   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:38.539955   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:38.539976   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:38.539989   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:38.540000   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:38.543737   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:38.543753   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:38.543761   29027 round_trippers.go:580]     Audit-Id: ebad1ffd-9450-454f-a47c-bd82c8be7ada
	I0906 15:14:38.543767   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:38.543773   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:38.543786   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:38.543794   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:38.543800   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:38 GMT
	I0906 15:14:38.543877   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:38.544269   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:38.544277   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:38.544285   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:38.544292   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:38.546174   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:38.546183   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:38.546189   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:38.546196   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:38 GMT
	I0906 15:14:38.546201   29027 round_trippers.go:580]     Audit-Id: d7588191-65ef-4132-8017-128edb8db051
	I0906 15:14:38.546205   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:38.546210   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:38.546214   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:38.546262   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:38.546451   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
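Each request/response header block in this trace comes from client-go's debugging round tripper (round_trippers.go), which wraps the HTTP transport when log verbosity is raised. The wrapper pattern itself is plain net/http; a simplified, hedged sketch follows (loggingTransport is an illustrative type, not client-go's actual implementation):

package main

import (
	"log"
	"net/http"
)

// loggingTransport wraps another RoundTripper and logs the request line,
// request headers, and response status, similar in spirit to the
// round_trippers.go lines above.
type loggingTransport struct {
	next http.RoundTripper
}

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, vals := range req.Header {
		for _, v := range vals {
			log.Printf("    %s: %s", k, v)
		}
	}
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s", resp.Status)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	if _, err := client.Get("https://example.com/"); err != nil {
		log.Fatal(err)
	}
}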
	I0906 15:14:39.039549   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:39.039575   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:39.039587   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:39.039596   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:39.043236   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:39.043253   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:39.043261   29027 round_trippers.go:580]     Audit-Id: 218150fa-2c5e-47d6-94fe-667af2066226
	I0906 15:14:39.043268   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:39.043275   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:39.043282   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:39.043292   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:39.043299   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:39 GMT
	I0906 15:14:39.043381   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:39.043778   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:39.043786   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:39.043792   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:39.043797   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:39.045877   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:39.045886   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:39.045891   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:39.045897   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:39.045902   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:39 GMT
	I0906 15:14:39.045906   29027 round_trippers.go:580]     Audit-Id: 530c6494-9b9a-4121-9f3d-6191debd34d8
	I0906 15:14:39.045911   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:39.045916   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:39.045968   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:39.537933   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:39.537958   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:39.537970   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:39.537980   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:39.541596   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:39.541611   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:39.541620   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:39 GMT
	I0906 15:14:39.541629   29027 round_trippers.go:580]     Audit-Id: dd0f8089-7d7c-4ba5-b6f5-47307c574ba0
	I0906 15:14:39.541636   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:39.541641   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:39.541649   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:39.541655   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:39.541735   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:39.542064   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:39.542070   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:39.542076   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:39.542081   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:39.544091   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:39.544100   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:39.544105   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:39.544110   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:39.544115   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:39.544120   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:39 GMT
	I0906 15:14:39.544126   29027 round_trippers.go:580]     Audit-Id: 8e21543b-8e1c-40ad-9b9f-e049205354ed
	I0906 15:14:39.544134   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:39.544189   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:40.039807   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:40.039822   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:40.039839   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:40.039846   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:40.042204   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:40.042214   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:40.042220   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:40.042225   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:40 GMT
	I0906 15:14:40.042230   29027 round_trippers.go:580]     Audit-Id: 341d1a87-53a8-4ba0-b93f-83e7be8dd858
	I0906 15:14:40.042234   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:40.042239   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:40.042244   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:40.042301   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:40.042607   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:40.042613   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:40.042620   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:40.042625   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:40.044831   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:40.044844   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:40.044852   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:40 GMT
	I0906 15:14:40.044859   29027 round_trippers.go:580]     Audit-Id: ca9fab87-5d83-4b07-8236-fcdc7c0609fd
	I0906 15:14:40.044866   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:40.044874   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:40.044880   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:40.044912   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:40.045255   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:40.537997   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:40.538025   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:40.538062   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:40.538085   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:40.542035   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:40.542046   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:40.542052   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:40.542059   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:40.542070   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:40.542075   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:40 GMT
	I0906 15:14:40.542080   29027 round_trippers.go:580]     Audit-Id: 869bac44-2821-431a-8551-4026d49dabdf
	I0906 15:14:40.542099   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:40.542222   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:40.542507   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:40.542513   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:40.542518   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:40.542524   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:40.544556   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:40.544565   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:40.544571   29027 round_trippers.go:580]     Audit-Id: ed80ec78-8c1a-4b2c-9179-1ce76f9dffe8
	I0906 15:14:40.544576   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:40.544581   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:40.544585   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:40.544590   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:40.544596   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:40 GMT
	I0906 15:14:40.544636   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:41.038740   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:41.038761   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:41.038773   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:41.038782   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:41.042020   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:41.042035   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:41.042046   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:41.042055   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:41 GMT
	I0906 15:14:41.042062   29027 round_trippers.go:580]     Audit-Id: 4fa68311-2376-4028-a0e4-56aa10a3f1b3
	I0906 15:14:41.042070   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:41.042074   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:41.042080   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:41.042235   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:41.042526   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:41.042534   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:41.042539   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:41.042544   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:41.044522   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:41.044531   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:41.044537   29027 round_trippers.go:580]     Audit-Id: 5f68ac15-3524-4efb-bdd9-6ebf142802f1
	I0906 15:14:41.044542   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:41.044547   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:41.044552   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:41.044557   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:41.044562   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:41 GMT
	I0906 15:14:41.044605   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:41.044784   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
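Alongside each pod check, the loop also re-reads the Node object, whose labels include the minikube.k8s.io/* entries (commit, name, primary, updated_at, version) that minikube stamps on nodes it provisions. A hedged client-go sketch for reading those labels back (a standalone illustration, not part of the test):

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the default kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Fetch the same Node object the wait loop keeps re-reading.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-20220906150606-22187", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for k, v := range node.Labels {
		if strings.HasPrefix(k, "minikube.k8s.io/") {
			fmt.Printf("%s=%s\n", k, v)
		}
	}
}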
	I0906 15:14:41.538066   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:41.538085   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:41.538097   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:41.538106   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:41.541536   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:41.541545   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:41.541550   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:41.541555   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:41.541560   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:41.541565   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:41 GMT
	I0906 15:14:41.541569   29027 round_trippers.go:580]     Audit-Id: b2f8bca3-69d0-4895-8527-95bff029cb9a
	I0906 15:14:41.541574   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:41.541625   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:41.541898   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:41.541907   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:41.541913   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:41.541926   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:41.543867   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:41.543875   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:41.543880   29027 round_trippers.go:580]     Audit-Id: 4f02517e-1c8e-4c49-9953-7d91575fcd36
	I0906 15:14:41.543890   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:41.543894   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:41.543899   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:41.543903   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:41.543909   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:41 GMT
	I0906 15:14:41.544211   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:42.039941   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:42.039966   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:42.040001   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:42.040013   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:42.043791   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:42.043807   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:42.043814   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:42.043822   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:42 GMT
	I0906 15:14:42.043830   29027 round_trippers.go:580]     Audit-Id: d2389df5-8b8c-41e2-8c7a-57ed0fdb8ef0
	I0906 15:14:42.043835   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:42.043841   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:42.043848   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:42.043927   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:42.044304   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:42.044311   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:42.044316   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:42.044323   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:42.046372   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:42.046380   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:42.046385   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:42.046390   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:42 GMT
	I0906 15:14:42.046395   29027 round_trippers.go:580]     Audit-Id: 7473e3da-0e35-4a03-876d-65c4fadc059a
	I0906 15:14:42.046400   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:42.046405   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:42.046409   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:42.046453   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:42.538440   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:42.538455   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:42.538463   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:42.538470   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:42.541381   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:42.541392   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:42.541397   29027 round_trippers.go:580]     Audit-Id: 23c5e34c-3f80-4500-aa22-855b4ce316a1
	I0906 15:14:42.541404   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:42.541411   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:42.541424   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:42.541429   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:42.541434   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:42 GMT
	I0906 15:14:42.541516   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:42.541809   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:42.541815   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:42.541821   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:42.541827   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:42.543774   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:42.543782   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:42.543787   29027 round_trippers.go:580]     Audit-Id: 2c954877-9aa0-4dba-a851-587df6694bb0
	I0906 15:14:42.543791   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:42.543796   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:42.543801   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:42.543806   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:42.543811   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:42 GMT
	I0906 15:14:42.543872   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.038061   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:43.038077   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.038085   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.038092   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.041143   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:43.041155   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.041161   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.041166   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.041171   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.041175   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.041180   29027 round_trippers.go:580]     Audit-Id: 8865d532-192a-452e-80b3-da9b88a2ad14
	I0906 15:14:43.041186   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.041241   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1147","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6565 chars]
	I0906 15:14:43.041528   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.041535   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.041540   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.041546   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.043233   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.043243   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.043248   29027 round_trippers.go:580]     Audit-Id: fbbc047a-af02-44fd-82d8-f037f2af8273
	I0906 15:14:43.043253   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.043258   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.043262   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.043267   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.043272   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.043312   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.043488   29027 pod_ready.go:92] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.043497   29027 pod_ready.go:81] duration metric: took 27.691382399s waiting for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
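The loop above polls the coredns Pod (and its hosting Node) roughly every 500 ms until the Pod reports condition Ready == "True". Below is a minimal standalone sketch of that readiness poll; the endpoint comes from the log, while the skipped TLS verification and the absence of client credentials are simplifying assumptions for a local minikube apiserver (minikube's real, authenticated implementation lives in pod_ready.go).

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // podReady reports whether the pod at url has condition Ready == "True".
    // Hypothetical helper; only the handful of fields we need are decoded.
    func podReady(client *http.Client, url string) (bool, error) {
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var pod struct {
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&pod); err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == "Ready" {
                return cond.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        // Local test cluster only: minikube's apiserver cert is self-signed,
        // and a real request would also need minikube's client credentials.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        url := "https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66"
        for deadline := time.Now().Add(6 * time.Minute); time.Now().Before(deadline); {
            if ok, err := podReady(client, url); err == nil && ok {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
        }
        fmt.Println("timed out waiting for Ready")
    }

The 6-minute deadline mirrors the "waiting up to 6m0s" lines in the trace; the same loop shape repeats below for etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler.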
	I0906 15:14:43.043504   29027 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.043531   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:43.043535   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.043540   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.043546   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.045244   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.045253   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.045259   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.045264   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.045269   29027 round_trippers.go:580]     Audit-Id: 3bebc7b1-2f53-4b66-b82e-12cd21a2e08a
	I0906 15:14:43.045274   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.045278   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.045286   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.045331   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1107","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6114 chars]
	I0906 15:14:43.045542   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.045548   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.045553   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.045558   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.047415   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.047423   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.047429   29027 round_trippers.go:580]     Audit-Id: 4a2cf001-0c50-42d5-809f-4006bfcd5a30
	I0906 15:14:43.047434   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.047439   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.047446   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.047451   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.047455   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.047513   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.047689   29027 pod_ready.go:92] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.047696   29027 pod_ready.go:81] duration metric: took 4.186455ms waiting for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.047711   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.047737   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:14:43.047741   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.047746   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.047752   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.049532   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.049541   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.049547   29027 round_trippers.go:580]     Audit-Id: f227be37-84e5-469b-b9b4-166bdc35fec8
	I0906 15:14:43.049552   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.049557   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.049563   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.049568   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.049573   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.049633   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"1113","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 8470 chars]
	I0906 15:14:43.049884   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.049890   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.049895   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.049900   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.051758   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.051766   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.051771   29027 round_trippers.go:580]     Audit-Id: ae16ea01-400a-4582-9051-668d3bea4818
	I0906 15:14:43.051776   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.051781   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.051785   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.051790   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.051795   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.051833   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.052006   29027 pod_ready.go:92] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.052012   29027 pod_ready.go:81] duration metric: took 4.295043ms waiting for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.052018   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.052044   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:14:43.052048   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.052053   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.052058   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.053747   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.053756   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.053762   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.053767   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.053772   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.053777   29027 round_trippers.go:580]     Audit-Id: f1e8ef5c-50e0-4e8c-85a2-65960e0be433
	I0906 15:14:43.053783   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.053787   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.053849   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"1120","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf
ig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config. [truncated 8045 chars]
	I0906 15:14:43.054106   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.054113   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.054118   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.054123   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.055985   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.055994   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.055999   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.056005   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.056009   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.056014   29027 round_trippers.go:580]     Audit-Id: 5b58e4a3-da2f-4fef-addc-022f3a7e7cd7
	I0906 15:14:43.056019   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.056024   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.056062   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.056232   29027 pod_ready.go:92] pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.056238   29027 pod_ready.go:81] duration metric: took 4.215573ms waiting for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.056243   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.056267   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-czbjx
	I0906 15:14:43.056270   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.056276   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.056281   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.057908   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.057917   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.057922   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.057927   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.057931   29027 round_trippers.go:580]     Audit-Id: 9a92c8bb-6a53-4c05-96ca-eb1282ce2a3d
	I0906 15:14:43.057936   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.057940   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.057945   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.057983   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-czbjx","generateName":"kube-proxy-","namespace":"kube-system","uid":"c88daf0a-05d7-45b7-b888-8e0749e4d321","resourceVersion":"887","creationTimestamp":"2022-09-06T22:08:13Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:08:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5997 chars]
	I0906 15:14:43.058217   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m03
	I0906 15:14:43.058222   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.058228   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.058234   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.059698   29027 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0906 15:14:43.059707   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.059712   29027 round_trippers.go:580]     Audit-Id: 43e2cfff-ad35-436c-8e57-c315e7da8720
	I0906 15:14:43.059717   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.059722   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.059728   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.059733   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.059738   29027 round_trippers.go:580]     Content-Length: 238
	I0906 15:14:43.059742   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.059752   29027 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-20220906150606-22187-m03\" not found","reason":"NotFound","details":{"name":"multinode-20220906150606-22187-m03","kind":"nodes"},"code":404}
	I0906 15:14:43.059790   29027 pod_ready.go:97] node "multinode-20220906150606-22187-m03" hosting pod "kube-proxy-czbjx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-20220906150606-22187-m03": nodes "multinode-20220906150606-22187-m03" not found
	I0906 15:14:43.059796   29027 pod_ready.go:81] duration metric: took 3.54821ms waiting for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	E0906 15:14:43.059801   29027 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-20220906150606-22187-m03" hosting pod "kube-proxy-czbjx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-20220906150606-22187-m03": nodes "multinode-20220906150606-22187-m03" not found
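The 404 above comes back as a structured Status object rather than a plain error page, which is what lets the wait loop downgrade the failure to a skip instead of aborting. A runnable sketch that decodes the exact body from the log and branches on it the same way:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // The 404 body from the log, reproduced verbatim.
    const body = `{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-20220906150606-22187-m03\" not found","reason":"NotFound","details":{"name":"multinode-20220906150606-22187-m03","kind":"nodes"},"code":404}`

    func main() {
        var st struct {
            Kind    string `json:"kind"`
            Status  string `json:"status"`
            Message string `json:"message"`
            Reason  string `json:"reason"`
            Code    int    `json:"code"`
        }
        if err := json.Unmarshal([]byte(body), &st); err != nil {
            panic(err)
        }
        // Branch the way the wait loop does: a NotFound node means the pod's
        // host is gone, so the Ready check is skipped rather than failed.
        if st.Kind == "Status" && st.Reason == "NotFound" {
            fmt.Println("skip:", st.Message)
        }
    }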
	I0906 15:14:43.059805   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.238308   29027 request.go:533] Waited for 178.471314ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:14:43.238364   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:14:43.238370   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.238379   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.238387   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.241058   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:43.241068   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.241073   29027 round_trippers.go:580]     Audit-Id: 7e638391-714a-4e9c-917f-3c9e5d4ba643
	I0906 15:14:43.241078   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.241083   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.241087   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.241092   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.241097   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.241144   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kkmpm","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b228e9a-6577-46a3-b848-9c9fca602ba6","resourceVersion":"1084","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5765 chars]
	I0906 15:14:43.438728   29027 request.go:533] Waited for 197.273813ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.438784   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.438793   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.438827   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.438845   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.442512   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:43.442523   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.442529   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.442535   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.442539   29027 round_trippers.go:580]     Audit-Id: 2f1b0aa2-f39a-48ce-b6ae-e998cb6dfb48
	I0906 15:14:43.442543   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.442547   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.442552   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.442608   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.442797   29027 pod_ready.go:92] pod "kube-proxy-kkmpm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.442804   29027 pod_ready.go:81] duration metric: took 382.989962ms waiting for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
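The "Waited ... due to client-side throttling, not priority and fairness" messages in this stretch come from the client's own rate limiter, not from the server's priority-and-fairness layer (whose X-Kubernetes-Pf-* headers appear separately in every response). A sketch of the effect with a token-bucket limiter; the 5 QPS / burst-of-10 values are the commonly cited client-go defaults, assumed here rather than read from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Assumed defaults: 5 requests/second with a burst of 10
        // (configurable via rest.Config in client-go).
        limiter := rate.NewLimiter(rate.Limit(5), 10)
        start := time.Now()
        for i := 1; i <= 15; i++ {
            _ = limiter.Wait(context.Background()) // blocks once the burst is spent
            fmt.Printf("request %2d at %v\n", i, time.Since(start).Round(10*time.Millisecond))
        }
    }

The first ten requests go out immediately; the rest are spaced about 200 ms apart, which is the scale of the 178-199 ms waits logged above. That spacing, not the 1-3 ms server responses, dominates the 382 ms total for this pod.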
	I0906 15:14:43.442811   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.639199   29027 request.go:533] Waited for 196.338015ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:14:43.639280   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:14:43.639288   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.639315   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.639324   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.642038   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:43.642050   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.642083   29027 round_trippers.go:580]     Audit-Id: 30533f7f-924d-4b97-beda-f06fdc552b35
	I0906 15:14:43.642090   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.642094   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.642099   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.642104   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.642109   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.642156   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wnrrx","generateName":"kube-proxy-","namespace":"kube-system","uid":"260cbcc2-7110-48ce-aa3d-482b3694ae6d","resourceVersion":"897","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5770 chars]
	I0906 15:14:43.838226   29027 request.go:533] Waited for 195.806474ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:14:43.838339   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:14:43.838348   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.838363   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.838373   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.841615   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:43.841629   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.841636   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.841642   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.841648   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.841653   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.841659   29027 round_trippers.go:580]     Audit-Id: 943a0961-24b2-4ca1-a9c3-ef8109397731
	I0906 15:14:43.841665   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.841867   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m02","uid":"0cd805fb-0749-46b4-a7e3-90583fb06a8a","resourceVersion":"833","creationTimestamp":"2022-09-06T22:10:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m02","kubernetes.io/os":"linux"},"annotations":{"node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:10:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:10:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":
{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-at [truncated 3821 chars]
	I0906 15:14:43.842080   29027 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.842091   29027 pod_ready.go:81] duration metric: took 399.272423ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.842100   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:44.039245   29027 request.go:533] Waited for 197.100451ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:44.039314   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:44.039319   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.039329   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.039352   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.042376   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:44.042388   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.042394   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.042400   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.042404   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.042409   29027 round_trippers.go:580]     Audit-Id: 01081d96-a7e9-4d3f-8c5c-a95b9156ea94
	I0906 15:14:44.042413   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.042419   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.042473   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1138","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 4927 chars]
	I0906 15:14:44.238080   29027 request.go:533] Waited for 195.37051ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:44.238111   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:44.238116   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.238123   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.238129   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.240543   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:44.240555   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.240560   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.240565   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.240569   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.240574   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.240579   29027 round_trippers.go:580]     Audit-Id: 7f4378fc-cd05-4f9e-8909-d4a7a10a4446
	I0906 15:14:44.240583   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.240634   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:44.241292   29027 pod_ready.go:92] pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:44.241398   29027 pod_ready.go:81] duration metric: took 399.288758ms waiting for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:44.241415   29027 pod_ready.go:38] duration metric: took 28.8967773s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:14:44.241436   29027 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:14:44.241498   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:14:44.252777   29027 command_runner.go:130] > 1605
	I0906 15:14:44.253581   29027 api_server.go:71] duration metric: took 29.098018916s to wait for apiserver process to appear ...
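The process check above shells out to pgrep inside the node (via ssh_runner) and gets back PID 1605. A local sketch of the same check, with the pattern copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // -x: the whole command line must match the pattern, -n: newest such
        // process, -f: match against the full command line, not just the name.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no apiserver process:", err)
            return
        }
        fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // "1605" in the log
    }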
	I0906 15:14:44.253594   29027 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:14:44.253601   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:44.258494   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 200:
	ok
	I0906 15:14:44.258522   29027 round_trippers.go:463] GET https://127.0.0.1:57276/version
	I0906 15:14:44.258526   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.258532   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.258538   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.259534   29027 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 15:14:44.259543   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.259549   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.259554   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.259558   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.259563   29027 round_trippers.go:580]     Content-Length: 261
	I0906 15:14:44.259567   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.259572   29027 round_trippers.go:580]     Audit-Id: 21b0a84c-d97e-4539-935f-c58786521315
	I0906 15:14:44.259578   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.259593   29027 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.0",
	  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
	  "gitTreeState": "clean",
	  "buildDate": "2022-08-23T17:38:15Z",
	  "goVersion": "go1.19",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 15:14:44.259619   29027 api_server.go:140] control plane version: v1.25.0
	I0906 15:14:44.259625   29027 api_server.go:130] duration metric: took 6.026551ms to wait for apiserver health ...
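After the process appears, readiness is confirmed over HTTP: /healthz must answer a bare "ok", then /version supplies the control-plane version decoded from the JSON shown above. A combined sketch under the same local-cluster assumptions as before (these two endpoints are typically readable without credentials via the default system:public-info-viewer binding):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Local test cluster only: the apiserver cert is self-signed.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        // 1. Liveness: /healthz answers a bare "ok" when the apiserver is up.
        resp, err := client.Get("https://127.0.0.1:57276/healthz")
        if err != nil {
            panic(err)
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

        // 2. Version: /version returns the JSON document shown in the log.
        resp, err = client.Get("https://127.0.0.1:57276/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v struct {
            Major      string `json:"major"`
            Minor      string `json:"minor"`
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.25.0 here
    }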
	I0906 15:14:44.259630   29027 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:14:44.438098   29027 request.go:533] Waited for 178.424817ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:44.438162   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:44.438171   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.438182   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.438193   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.443150   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:44.443163   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.443170   29027 round_trippers.go:580]     Audit-Id: 4ac8a134-3cef-4e88-a6bc-552819320443
	I0906 15:14:44.443174   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.443180   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.443186   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.443192   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.443196   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.444883   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1153"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1147","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86207 chars]
	I0906 15:14:44.446738   29027 system_pods.go:59] 12 kube-system pods found
	I0906 15:14:44.446748   29027 system_pods.go:61] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:14:44.446752   29027 system_pods.go:61] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:14:44.446756   29027 system_pods.go:61] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:14:44.446759   29027 system_pods.go:61] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:14:44.446762   29027 system_pods.go:61] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:14:44.446766   29027 system_pods.go:61] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:14:44.446770   29027 system_pods.go:61] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:14:44.446773   29027 system_pods.go:61] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:14:44.446776   29027 system_pods.go:61] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:14:44.446780   29027 system_pods.go:61] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:14:44.446785   29027 system_pods.go:61] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:14:44.446791   29027 system_pods.go:61] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 15:14:44.446796   29027 system_pods.go:74] duration metric: took 187.161934ms to wait for pod list to return data ...
	I0906 15:14:44.446801   29027 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:14:44.638323   29027 request.go:533] Waited for 191.446721ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/default/serviceaccounts
	I0906 15:14:44.638429   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/default/serviceaccounts
	I0906 15:14:44.638438   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.638447   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.638459   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.641641   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:44.641654   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.641660   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.641665   29027 round_trippers.go:580]     Audit-Id: eb1f03cd-86cd-4381-b321-768925f237ea
	I0906 15:14:44.641670   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.641674   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.641680   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.641684   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.641688   29027 round_trippers.go:580]     Content-Length: 262
	I0906 15:14:44.641701   29027 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1153"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2535e7c3-51eb-44d2-8df8-c188db57dc73","resourceVersion":"310","creationTimestamp":"2022-09-06T22:06:47Z"}}]}
	I0906 15:14:44.641819   29027 default_sa.go:45] found service account: "default"
	I0906 15:14:44.641825   29027 default_sa.go:55] duration metric: took 195.019776ms for default service account to be created ...
	I0906 15:14:44.641830   29027 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:14:44.838043   29027 request.go:533] Waited for 196.177236ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:44.838100   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:44.838106   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.838132   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.838144   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.842231   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:44.842241   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.842247   29027 round_trippers.go:580]     Audit-Id: 2cb0098d-6c33-4bcd-981b-da5acd2add2e
	I0906 15:14:44.842252   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.842257   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.842264   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.842268   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.842277   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.843873   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1153"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1147","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86207 chars]
	I0906 15:14:44.846190   29027 system_pods.go:86] 12 kube-system pods found
	I0906 15:14:44.846203   29027 system_pods.go:89] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:14:44.846208   29027 system_pods.go:89] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:14:44.846212   29027 system_pods.go:89] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:14:44.846216   29027 system_pods.go:89] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:14:44.846219   29027 system_pods.go:89] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:14:44.846223   29027 system_pods.go:89] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:14:44.846227   29027 system_pods.go:89] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:14:44.846232   29027 system_pods.go:89] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:14:44.846235   29027 system_pods.go:89] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:14:44.846239   29027 system_pods.go:89] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:14:44.846257   29027 system_pods.go:89] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:14:44.846266   29027 system_pods.go:89] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 15:14:44.846272   29027 system_pods.go:126] duration metric: took 204.437402ms to wait for k8s-apps to be running ...
	I0906 15:14:44.846278   29027 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:14:44.846326   29027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:14:44.855759   29027 system_svc.go:56] duration metric: took 9.476035ms WaitForService to wait for kubelet.
	I0906 15:14:44.855772   29027 kubeadm.go:573] duration metric: took 29.700208469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:14:44.855788   29027 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:14:45.038429   29027 request.go:533] Waited for 182.602775ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes
	I0906 15:14:45.038480   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes
	I0906 15:14:45.038486   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:45.038494   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:45.038510   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:45.041650   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:45.041662   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:45.041667   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:45.041672   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:45.041678   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:45 GMT
	I0906 15:14:45.041683   29027 round_trippers.go:580]     Audit-Id: 7033247f-d261-41dd-8f59-2bddffd0c32f
	I0906 15:14:45.041687   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:45.041692   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:45.041756   29027 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1153"},"items":[{"metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet"," [truncated 10244 chars]
	I0906 15:14:45.042047   29027 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:14:45.042056   29027 node_conditions.go:123] node cpu capacity is 6
	I0906 15:14:45.042063   29027 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:14:45.042066   29027 node_conditions.go:123] node cpu capacity is 6
	I0906 15:14:45.042069   29027 node_conditions.go:105] duration metric: took 186.276039ms to run NodePressure ...
	I0906 15:14:45.042078   29027 start.go:216] waiting for startup goroutines ...
	I0906 15:14:45.042679   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:14:45.042742   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:14:45.064900   29027 out.go:177] * Starting worker node multinode-20220906150606-22187-m02 in cluster multinode-20220906150606-22187
	I0906 15:14:45.086431   29027 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:14:45.107329   29027 out.go:177] * Pulling base image ...
	I0906 15:14:45.149556   29027 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:14:45.149563   29027 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:14:45.149596   29027 cache.go:57] Caching tarball of preloaded images
	I0906 15:14:45.149757   29027 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:14:45.149780   29027 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:14:45.150696   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:14:45.213521   29027 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:14:45.213549   29027 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:14:45.213559   29027 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:14:45.213601   29027 start.go:364] acquiring machines lock for multinode-20220906150606-22187-m02: {Name:mk634e5142ae9a72af4ccf4e417277befcfbdc1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:14:45.213679   29027 start.go:368] acquired machines lock for "multinode-20220906150606-22187-m02" in 67.581µs
	I0906 15:14:45.213696   29027 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:14:45.213701   29027 fix.go:55] fixHost starting: m02
	I0906 15:14:45.213937   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:14:45.277559   29027 fix.go:103] recreateIfNeeded on multinode-20220906150606-22187-m02: state=Stopped err=<nil>
	W0906 15:14:45.277580   29027 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:14:45.299176   29027 out.go:177] * Restarting existing docker container for "multinode-20220906150606-22187-m02" ...
	I0906 15:14:45.341287   29027 cli_runner.go:164] Run: docker start multinode-20220906150606-22187-m02
	I0906 15:14:45.685256   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:14:45.750301   29027 kic.go:415] container "multinode-20220906150606-22187-m02" state is running.
	I0906 15:14:45.750888   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:14:45.818341   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:14:45.818831   29027 machine.go:88] provisioning docker machine ...
	I0906 15:14:45.818848   29027 ubuntu.go:169] provisioning hostname "multinode-20220906150606-22187-m02"
	I0906 15:14:45.818937   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:45.892246   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:45.892421   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:45.892436   29027 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220906150606-22187-m02 && echo "multinode-20220906150606-22187-m02" | sudo tee /etc/hostname
	I0906 15:14:46.028099   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220906150606-22187-m02
	
	I0906 15:14:46.028170   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:46.093016   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:46.093233   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:46.093255   29027 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220906150606-22187-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220906150606-22187-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220906150606-22187-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:14:46.203928   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:14:46.203950   29027 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:14:46.203966   29027 ubuntu.go:177] setting up certificates
	I0906 15:14:46.203975   29027 provision.go:83] configureAuth start
	I0906 15:14:46.204050   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:14:46.270597   29027 provision.go:138] copyHostCerts
	I0906 15:14:46.270653   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:14:46.270706   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:14:46.270714   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:14:46.270810   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:14:46.270963   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:14:46.270994   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:14:46.270999   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:14:46.271059   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:14:46.271172   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:14:46.271199   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:14:46.271203   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:14:46.271259   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:14:46.271374   29027 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.multinode-20220906150606-22187-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220906150606-22187-m02]
	I0906 15:14:46.374806   29027 provision.go:172] copyRemoteCerts
	I0906 15:14:46.374879   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:14:46.374958   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:46.444609   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:46.531684   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 15:14:46.531748   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:14:46.549421   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 15:14:46.549491   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0906 15:14:46.566040   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 15:14:46.566107   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:14:46.583648   29027 provision.go:86] duration metric: configureAuth took 379.661276ms
	I0906 15:14:46.583660   29027 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:14:46.583852   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:14:46.583909   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:46.648048   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:46.648226   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:46.648237   29027 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:14:46.769847   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:14:46.769862   29027 ubuntu.go:71] root file system type: overlay
	I0906 15:14:46.770004   29027 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:14:46.770082   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:46.834190   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:46.834349   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:46.834414   29027 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:14:46.957738   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:14:46.957823   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.021758   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:47.021919   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:47.021933   29027 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:14:47.137286   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:14:47.137302   29027 machine.go:91] provisioned docker machine in 1.318458177s
	I0906 15:14:47.137308   29027 start.go:300] post-start starting for "multinode-20220906150606-22187-m02" (driver="docker")
	I0906 15:14:47.137314   29027 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:14:47.137368   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:14:47.137412   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.203899   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:47.286542   29027 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:14:47.289824   29027 command_runner.go:130] > NAME="Ubuntu"
	I0906 15:14:47.289833   29027 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0906 15:14:47.289836   29027 command_runner.go:130] > ID=ubuntu
	I0906 15:14:47.289840   29027 command_runner.go:130] > ID_LIKE=debian
	I0906 15:14:47.289843   29027 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0906 15:14:47.289847   29027 command_runner.go:130] > VERSION_ID="20.04"
	I0906 15:14:47.289851   29027 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 15:14:47.289855   29027 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 15:14:47.289859   29027 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 15:14:47.289875   29027 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 15:14:47.289879   29027 command_runner.go:130] > VERSION_CODENAME=focal
	I0906 15:14:47.289882   29027 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0906 15:14:47.289925   29027 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:14:47.289936   29027 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:14:47.289947   29027 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:14:47.289952   29027 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:14:47.289958   29027 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:14:47.290073   29027 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:14:47.290204   29027 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:14:47.290212   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /etc/ssl/certs/221872.pem
	I0906 15:14:47.290353   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:14:47.297939   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:14:47.315017   29027 start.go:303] post-start completed in 177.699595ms
	I0906 15:14:47.315088   29027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:14:47.315180   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.378993   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:47.462532   29027 command_runner.go:130] > 11%
	I0906 15:14:47.462993   29027 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:14:47.466982   29027 command_runner.go:130] > 50G
	I0906 15:14:47.467253   29027 fix.go:57] fixHost completed within 2.25354269s
	I0906 15:14:47.467264   29027 start.go:83] releasing machines lock for "multinode-20220906150606-22187-m02", held for 2.253569942s
	I0906 15:14:47.467329   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:14:47.552548   29027 out.go:177] * Found network options:
	I0906 15:14:47.574060   29027 out.go:177]   - NO_PROXY=192.168.58.2
	W0906 15:14:47.595328   29027 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 15:14:47.595374   29027 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 15:14:47.595549   29027 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:14:47.595557   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:14:47.595642   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.595643   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.663985   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:47.664118   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:47.788186   29027 command_runner.go:130] > <a href="https://github.com/kubernetes/k8s.io/wiki/New-Registry-url-for-Kubernetes-(registry.k8s.io)">Temporary Redirect</a>.
	I0906 15:14:47.791338   29027 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0906 15:14:47.805508   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:14:47.872738   29027 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 15:14:47.965571   29027 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:14:47.976781   29027 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0906 15:14:47.977245   29027 command_runner.go:130] > [Unit]
	I0906 15:14:47.977260   29027 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 15:14:47.977273   29027 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 15:14:47.977281   29027 command_runner.go:130] > BindsTo=containerd.service
	I0906 15:14:47.977288   29027 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0906 15:14:47.977294   29027 command_runner.go:130] > Wants=network-online.target
	I0906 15:14:47.977303   29027 command_runner.go:130] > Requires=docker.socket
	I0906 15:14:47.977309   29027 command_runner.go:130] > StartLimitBurst=3
	I0906 15:14:47.977312   29027 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 15:14:47.977315   29027 command_runner.go:130] > [Service]
	I0906 15:14:47.977320   29027 command_runner.go:130] > Type=notify
	I0906 15:14:47.977327   29027 command_runner.go:130] > Restart=on-failure
	I0906 15:14:47.977347   29027 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0906 15:14:47.977360   29027 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 15:14:47.977374   29027 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 15:14:47.977387   29027 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 15:14:47.977405   29027 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 15:14:47.977415   29027 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 15:14:47.977423   29027 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 15:14:47.977433   29027 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 15:14:47.977442   29027 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 15:14:47.977450   29027 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 15:14:47.977454   29027 command_runner.go:130] > ExecStart=
	I0906 15:14:47.977465   29027 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0906 15:14:47.977471   29027 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 15:14:47.977478   29027 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 15:14:47.977483   29027 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 15:14:47.977486   29027 command_runner.go:130] > LimitNOFILE=infinity
	I0906 15:14:47.977490   29027 command_runner.go:130] > LimitNPROC=infinity
	I0906 15:14:47.977493   29027 command_runner.go:130] > LimitCORE=infinity
	I0906 15:14:47.977499   29027 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 15:14:47.977504   29027 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 15:14:47.977507   29027 command_runner.go:130] > TasksMax=infinity
	I0906 15:14:47.977515   29027 command_runner.go:130] > TimeoutStartSec=0
	I0906 15:14:47.977520   29027 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 15:14:47.977524   29027 command_runner.go:130] > Delegate=yes
	I0906 15:14:47.977534   29027 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 15:14:47.977541   29027 command_runner.go:130] > KillMode=process
	I0906 15:14:47.977556   29027 command_runner.go:130] > [Install]
	I0906 15:14:47.977572   29027 command_runner.go:130] > WantedBy=multi-user.target
	I0906 15:14:47.979109   29027 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:14:47.979154   29027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:14:47.988414   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:14:48.000289   29027 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:14:48.000300   29027 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:14:48.000953   29027 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:14:48.072271   29027 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:14:48.142544   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:14:48.205608   29027 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:14:48.432967   29027 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:14:48.498398   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:14:48.562390   29027 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:14:48.571790   29027 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:14:48.571856   29027 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:14:48.575594   29027 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 15:14:48.575606   29027 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 15:14:48.575615   29027 command_runner.go:130] > Device: 10002fh/1048623d	Inode: 130         Links: 1
	I0906 15:14:48.575625   29027 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0906 15:14:48.575633   29027 command_runner.go:130] > Access: 2022-09-06 22:14:47.920648036 +0000
	I0906 15:14:48.575638   29027 command_runner.go:130] > Modify: 2022-09-06 22:14:47.892648038 +0000
	I0906 15:14:48.575643   29027 command_runner.go:130] > Change: 2022-09-06 22:14:47.897648038 +0000
	I0906 15:14:48.575646   29027 command_runner.go:130] >  Birth: -
	I0906 15:14:48.575760   29027 start.go:471] Will wait 60s for crictl version
	I0906 15:14:48.575805   29027 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:14:48.601488   29027 command_runner.go:130] > Version:  0.1.0
	I0906 15:14:48.601514   29027 command_runner.go:130] > RuntimeName:  docker
	I0906 15:14:48.601522   29027 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0906 15:14:48.601537   29027 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0906 15:14:48.603289   29027 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:14:48.603351   29027 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:14:48.636740   29027 command_runner.go:130] > 20.10.17
	I0906 15:14:48.639507   29027 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:14:48.671783   29027 command_runner.go:130] > 20.10.17
	I0906 15:14:48.719080   29027 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:14:48.740144   29027 out.go:177]   - env NO_PROXY=192.168.58.2
	I0906 15:14:48.761288   29027 cli_runner.go:164] Run: docker exec -t multinode-20220906150606-22187-m02 dig +short host.docker.internal
	I0906 15:14:48.883296   29027 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:14:48.883380   29027 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:14:48.887746   29027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:14:48.897032   29027 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187 for IP: 192.168.58.3
	I0906 15:14:48.897185   29027 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:14:48.897235   29027 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:14:48.897242   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 15:14:48.897263   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 15:14:48.897281   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 15:14:48.897329   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 15:14:48.897448   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:14:48.897499   29027 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:14:48.897512   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:14:48.897551   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:14:48.897584   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:14:48.897617   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:14:48.897685   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:14:48.897720   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /usr/share/ca-certificates/221872.pem
	I0906 15:14:48.897744   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:48.897761   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem -> /usr/share/ca-certificates/22187.pem
	I0906 15:14:48.898077   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:14:48.916608   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:14:48.932692   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:14:48.950240   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:14:48.966949   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:14:48.984250   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:14:49.000698   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:14:49.017011   29027 ssh_runner.go:195] Run: openssl version
	I0906 15:14:49.022038   29027 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0906 15:14:49.022238   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:14:49.029845   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:49.033599   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:49.033622   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:49.033662   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:49.038469   29027 command_runner.go:130] > b5213941
	I0906 15:14:49.038723   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:14:49.045631   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:14:49.053075   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:14:49.056828   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:14:49.056844   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:14:49.056882   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:14:49.061737   29027 command_runner.go:130] > 51391683
	I0906 15:14:49.062373   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:14:49.070334   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:14:49.078258   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:14:49.081954   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:14:49.081973   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:14:49.082014   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:14:49.086807   29027 command_runner.go:130] > 3ec20f2e
	I0906 15:14:49.087132   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
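
The three test/ln runs above follow OpenSSL's c_rehash convention: during chain verification a CA file is located through a symlink named <subject-hash>.0, where the hash is exactly what openssl x509 -hash prints. A minimal sketch of the per-certificate step just performed (the file name myca.pem is a placeholder, not taken from this run):

    # compute the subject hash, then create the <hash>.0 lookup link
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/myca.pem)
    sudo ln -fs /etc/ssl/certs/myca.pem "/etc/ssl/certs/${hash}.0"

Note the link targets the /etc/ssl/certs copy created by the earlier ln -fs of each .pem into that directory.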
	I0906 15:14:49.094065   29027 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:14:49.165274   29027 command_runner.go:130] > systemd
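
The detected cgroup driver is threaded into everything below: kubeadm's CgroupDriver field and the KubeletConfiguration's cgroupDriver must agree with what Docker reports, or the kubelet fails to come up cleanly. A quick manual check (the Go template needs quoting in an interactive shell):

    docker info --format '{{.CgroupDriver}}'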
	I0906 15:14:49.168628   29027 cni.go:95] Creating CNI manager for ""
	I0906 15:14:49.168638   29027 cni.go:156] 2 nodes found, recommending kindnet
	I0906 15:14:49.168656   29027 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:14:49.168668   29027 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220906150606-22187 NodeName:multinode-20220906150606-22187-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:14:49.168753   29027 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220906150606-22187-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
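
The rendered file stacks four kubeadm documents: InitConfiguration (this node's registration and API endpoint), ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Per the "# disable disk resource management by default" comment, disk-pressure eviction is deliberately neutered (imageGCHighThresholdPercent: 100, all evictionHard thresholds at 0%). The authoritative copy held by the control plane, which the preflight phase later in this log reads back, can be inspected with:

    kubectl -n kube-system get cm kubeadm-config -o yaml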
	
	I0906 15:14:49.168795   29027 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220906150606-22187-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:14:49.168853   29027 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:14:49.175596   29027 command_runner.go:130] > kubeadm
	I0906 15:14:49.175605   29027 command_runner.go:130] > kubectl
	I0906 15:14:49.175609   29027 command_runner.go:130] > kubelet
	I0906 15:14:49.176427   29027 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:14:49.176477   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0906 15:14:49.183422   29027 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (496 bytes)
	I0906 15:14:49.196163   29027 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
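
The 496-byte drop-in written above uses the standard systemd override pattern: the bare ExecStart= line first clears the command inherited from the base kubelet.service before the full command line is set; without it, systemd would reject a second ExecStart for a non-oneshot service. On the node, the effective unit would be inspected and re-applied with:

    systemctl cat kubelet
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet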
	I0906 15:14:49.209908   29027 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:14:49.213641   29027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
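
That brace group keeps the hosts entry idempotent: filter out any existing control-plane.minikube.internal line, append the current mapping, then write the result back with sudo cp rather than mv, because inside a container /etc/hosts is a bind mount and can only be overwritten in place, never replaced by rename. The same pattern, generalized (host name and IP here are placeholders):

    { grep -v $'\tmyhost.internal$' /etc/hosts; echo "10.0.0.5	myhost.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts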
	I0906 15:14:49.222875   29027 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:14:49.223063   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:14:49.223064   29027 start.go:285] JoinCluster: &{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:14:49.223130   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0906 15:14:49.223175   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:14:49.288153   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:14:49.429582   29027 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
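
The --ttl=0 in the command above mints a non-expiring bootstrap token. Both halves of the printed join command can be reproduced by hand on the control-plane node; the discovery hash is the SHA-256 of the cluster CA's public key (the standard kubeadm recipe, assuming the CA sits at the default /etc/kubernetes/pki/ca.crt):

    sudo kubeadm token create --print-join-command --ttl=0
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'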
	I0906 15:14:49.434195   29027 start.go:298] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:14:49.434220   29027 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:14:49.434452   29027 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl drain multinode-20220906150606-22187-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0906 15:14:49.434493   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:14:49.500020   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:14:49.642697   29027 command_runner.go:130] > node/multinode-20220906150606-22187-m02 cordoned
	I0906 15:14:52.657820   29027 command_runner.go:130] > pod "busybox-65db55d5d6-rqxp8" has DeletionTimestamp older than 1 seconds, skipping
	I0906 15:14:52.657834   29027 command_runner.go:130] > node/multinode-20220906150606-22187-m02 drained
	I0906 15:14:52.661056   29027 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0906 15:14:52.661071   29027 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-cddz8, kube-system/kube-proxy-wnrrx
	I0906 15:14:52.661096   29027 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl drain multinode-20220906150606-22187-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.226609366s)
	I0906 15:14:52.661109   29027 node.go:109] successfully drained node "m02"
	I0906 15:14:52.661411   29027 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:14:52.661610   29027 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57276", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:14:52.661857   29027 request.go:1073] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0906 15:14:52.661883   29027 round_trippers.go:463] DELETE https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:14:52.661887   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:52.661894   29027 round_trippers.go:473]     Content-Type: application/json
	I0906 15:14:52.661906   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:52.661911   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:52.665187   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:52.665199   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:52.665204   29027 round_trippers.go:580]     Audit-Id: 3457ee85-e95d-4e94-93c5-abb29c0d4891
	I0906 15:14:52.665210   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:52.665215   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:52.665219   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:52.665224   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:52.665231   29027 round_trippers.go:580]     Content-Length: 185
	I0906 15:14:52.665236   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:52 GMT
	I0906 15:14:52.665249   29027 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-20220906150606-22187-m02","kind":"nodes","uid":"0cd805fb-0749-46b4-a7e3-90583fb06a8a"}}
	I0906 15:14:52.665267   29027 node.go:125] successfully deleted node "m02"
	I0906 15:14:52.665274   29027 start.go:302] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
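
The drain-and-delete sequence above (kubectl drain over SSH, then a raw DELETE against /api/v1/nodes/...) is equivalent to running the following with a kubeconfig for this cluster:

    kubectl drain multinode-20220906150606-22187-m02 --force --grace-period=1 \
        --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
    kubectl delete node multinode-20220906150606-22187-m02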
	I0906 15:14:52.665286   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:14:52.665297   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:14:52.698107   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:14:52.799738   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:14:52.799761   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:14:52.827532   29027 command_runner.go:130] ! W0906 22:14:52.708626    1098 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:14:52.827545   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:14:52.827558   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:14:52.827564   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:14:52.827569   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:14:52.827575   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:14:52.827584   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:14:52.827590   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:14:52.827634   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:14:52.708626    1098 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:14:52.827644   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:14:52.827655   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:14:52.871345   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:14:52.871365   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:14:52.871387   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:14:52.871409   29027 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:14:52.708626    1098 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
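
None of the retries below can converge. Deleting the Node object does not stop the kubelet already running on m02; still holding a valid /etc/kubernetes/kubelet.conf, it apparently re-registers the node right away (consistent with the Port 10250 in-use and kubelet.conf-already-exists warnings), so each kubeadm join dies in the kubelet-start phase on the same "already exists" conflict. The cleanup that would break the cycle, kubeadm reset --force, also fails because the node exposes two CRI sockets. Pinning the socket would let reset proceed; a plausible manual workaround, using the cri-dockerd socket from kubeadm's own error message (not something the test harness runs):

    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock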
	I0906 15:15:03.919365   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:15:03.919443   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:15:03.956243   29027 command_runner.go:130] ! W0906 22:15:03.974898    1512 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:15:03.956951   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:15:03.982039   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:15:03.986535   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:15:04.043999   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:15:04.044014   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:15:04.070699   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:15:04.070711   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:04.073761   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:15:04.073777   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:15:04.073784   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:15:04.073808   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:03.974898    1512 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:04.073825   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:15:04.073833   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:15:04.110982   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:15:04.110995   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:04.111009   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:04.111019   29027 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:03.974898    1512 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.718841   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:15:25.718875   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:15:25.753192   29027 command_runner.go:130] ! W0906 22:15:25.765845    2006 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:15:25.753207   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:15:25.776083   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:15:25.780723   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:15:25.838052   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:15:25.838067   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:15:25.863965   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:15:25.863984   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.866695   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:15:25.866706   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:15:25.866714   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:15:25.866744   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:25.765845    2006 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.866752   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:15:25.866759   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:15:25.901532   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:15:25.901546   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.901560   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.901572   29027 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:25.765845    2006 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.104629   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:15:52.120910   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:15:52.156464   29027 command_runner.go:130] ! W0906 22:15:52.166575    2284 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:15:52.156535   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:15:52.180408   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:15:52.185678   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:15:52.244879   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:15:52.244892   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:15:52.270041   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:15:52.270054   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.273029   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:15:52.273041   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:15:52.273047   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:15:52.273074   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:52.166575    2284 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.273082   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:15:52.273090   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:15:52.310975   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:15:52.310988   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.311003   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.311015   29027 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:52.166575    2284 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:16:23.961128   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:16:23.961248   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:16:23.997795   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:16:24.094079   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:16:24.094111   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:16:24.111007   29027 command_runner.go:130] ! W0906 22:16:24.008417    2614 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:16:24.111022   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:16:24.111035   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:16:24.111039   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:16:24.111044   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:16:24.111050   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:16:24.111065   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:16:24.111072   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:16:24.111113   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:16:24.008417    2614 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:16:24.111123   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:16:24.111133   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:16:24.148203   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:16:24.148216   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:16:24.148232   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:16:24.148244   29027 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:16:24.008417    2614 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:17:10.960353   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:17:10.960440   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:17:10.995679   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:17:11.095742   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:17:11.095763   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:17:11.113200   29027 command_runner.go:130] ! W0906 22:17:10.997798    3044 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:17:11.113214   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:17:11.113225   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:17:11.113230   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:17:11.113236   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:17:11.113242   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:17:11.113252   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:17:11.113257   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:17:11.113285   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:17:10.997798    3044 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:17:11.113292   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:17:11.113302   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:17:11.152058   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:17:11.152071   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:17:11.152085   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:17:11.152100   29027 start.go:287] JoinCluster complete in 2m21.928535342s
	I0906 15:17:11.174107   29027 out.go:177] 
	W0906 15:17:11.195219   29027 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:17:10.997798    3044 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:17:11.195252   29027 out.go:239] * 
	W0906 15:17:11.196016   29027 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:17:11.281123   29027 out.go:177] 

                                                
                                                
** /stderr **
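The failure above has two independent causes: the kubeadm join is rejected because a Node object named "multinode-20220906150606-22187-m02" with status "Ready" is still registered with the API server, and the automatic kubeadm reset retry then aborts because both containerd and cri-dockerd expose CRI sockets on the host. A minimal manual-remediation sketch under those assumptions (these commands are illustrative and were not part of the test run):

	# against the control plane: remove the stale Node object so a node of the same name can rejoin
	kubectl delete node multinode-20220906150606-22187-m02
	# on the worker: reset with an explicit CRI endpoint so kubeadm does not have to choose between sockets
	sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock

Passing the unix:// scheme explicitly also avoids the initconfiguration.go deprecation warning about schemeless CRI endpoints seen earlier in this log.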
multinode_test.go:354: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-20220906150606-22187 --wait=true -v=8 --alsologtostderr --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220906150606-22187
helpers_test.go:235: (dbg) docker inspect multinode-20220906150606-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf",
	        "Created": "2022-09-06T22:06:13.015437812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 96773,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:13:48.36214702Z",
	            "FinishedAt": "2022-09-06T22:13:34.197951446Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf/hostname",
	        "HostsPath": "/var/lib/docker/containers/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf/hosts",
	        "LogPath": "/var/lib/docker/containers/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf/f96b4439a54b924389c55ab6eb4d6e8d3c2347f4b8106d7c22a3962125895ccf-json.log",
	        "Name": "/multinode-20220906150606-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20220906150606-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20220906150606-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/79698af834265629f9a73822cb28e5c93f4ea8c6b298e4511abd2a389762f014-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79698af834265629f9a73822cb28e5c93f4ea8c6b298e4511abd2a389762f014/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79698af834265629f9a73822cb28e5c93f4ea8c6b298e4511abd2a389762f014/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79698af834265629f9a73822cb28e5c93f4ea8c6b298e4511abd2a389762f014/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20220906150606-22187",
	                "Source": "/var/lib/docker/volumes/multinode-20220906150606-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20220906150606-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20220906150606-22187",
	                "name.minikube.sigs.k8s.io": "multinode-20220906150606-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "081c6172e629d2c1a41dde1968965373837cbe40e6939a251a39d554bee5e652",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57272"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57273"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57274"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57275"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57276"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/081c6172e629",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20220906150606-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f96b4439a54b",
	                        "multinode-20220906150606-22187"
	                    ],
	                    "NetworkID": "ffe171e224281ce06adeca6944e902dfd3e453d98c2cfc0a549b1b9fef9c84ec",
	                    "EndpointID": "dfbe75f7a36f7e68f65ac0353ad6c078186612f3f65ce81451a338525765c264",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
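Single fields can be read out of an inspect payload like the one above with a Go template instead of parsing the full JSON; a small sketch against the same container name (the harness itself issues the first form further down in this log):

	# lifecycle state of the node container
	docker inspect -f '{{.State.Status}}' multinode-20220906150606-22187
	# IPv4 address on the per-profile Docker network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' multinode-20220906150606-22187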
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20220906150606-22187 -n multinode-20220906150606-22187
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 logs -n 25: (3.386014551s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	| Command |                                                                   Args                                                                   |            Profile             |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m02:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187:/home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187.txt                |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 sudo cat                                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187.txt                                               |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m02:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187-m03:/home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187-m03.txt        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m03 sudo cat                                                        | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187-m03.txt                                           |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp testdata/cp-test.txt                                                                                   | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187-m03:/home/docker/cp-test.txt                                                                              |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m03:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile219338308/001/cp-test_multinode-20220906150606-22187-m03.txt |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m03:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187:/home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187.txt                |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 sudo cat                                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187.txt                                               |                                |         |         |                     |                     |
	| cp      | multinode-20220906150606-22187 cp multinode-20220906150606-22187-m03:/home/docker/cp-test.txt                                            | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | multinode-20220906150606-22187-m02:/home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187-m02.txt        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220906150606-22187-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m02 sudo cat                                                        | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | /home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187-m02.txt                                           |                                |         |         |                     |                     |
	| node    | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:08 PDT |
	|         | node stop m03                                                                                                                            |                                |         |         |                     |                     |
	| node    | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:08 PDT | 06 Sep 22 15:09 PDT |
	|         | node start m03                                                                                                                           |                                |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                                        |                                |         |         |                     |                     |
	| node    | list -p                                                                                                                                  | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:09 PDT |                     |
	|         | multinode-20220906150606-22187                                                                                                           |                                |         |         |                     |                     |
	| stop    | -p                                                                                                                                       | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:09 PDT | 06 Sep 22 15:09 PDT |
	|         | multinode-20220906150606-22187                                                                                                           |                                |         |         |                     |                     |
	| start   | -p                                                                                                                                       | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:09 PDT |                     |
	|         | multinode-20220906150606-22187                                                                                                           |                                |         |         |                     |                     |
	|         | --wait=true -v=8                                                                                                                         |                                |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                                        |                                |         |         |                     |                     |
	| node    | list -p                                                                                                                                  | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:13 PDT |                     |
	|         | multinode-20220906150606-22187                                                                                                           |                                |         |         |                     |                     |
	| node    | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:13 PDT | 06 Sep 22 15:13 PDT |
	|         | node delete m03                                                                                                                          |                                |         |         |                     |                     |
	| stop    | multinode-20220906150606-22187                                                                                                           | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:13 PDT | 06 Sep 22 15:13 PDT |
	|         | stop                                                                                                                                     |                                |         |         |                     |                     |
	| start   | -p                                                                                                                                       | multinode-20220906150606-22187 | jenkins | v1.26.1 | 06 Sep 22 15:13 PDT |                     |
	|         | multinode-20220906150606-22187                                                                                                           |                                |         |         |                     |                     |
	|         | --wait=true -v=8                                                                                                                         |                                |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                                        |                                |         |         |                     |                     |
	|         | --driver=docker                                                                                                                          |                                |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:13:47
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:13:47.095685   29027 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:13:47.095920   29027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:13:47.095925   29027 out.go:309] Setting ErrFile to fd 2...
	I0906 15:13:47.095929   29027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:13:47.096053   29027 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:13:47.096485   29027 out.go:303] Setting JSON to false
	I0906 15:13:47.111360   29027 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7998,"bootTime":1662494429,"procs":341,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:13:47.111459   29027 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:13:47.133023   29027 out.go:177] * [multinode-20220906150606-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:13:47.177166   29027 notify.go:193] Checking for updates...
	I0906 15:13:47.198517   29027 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:13:47.219906   29027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:13:47.241034   29027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:13:47.262917   29027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:13:47.285064   29027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:13:47.307506   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:13:47.308140   29027 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:13:47.376045   29027 docker.go:137] docker version: linux-20.10.17
	I0906 15:13:47.376161   29027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:13:47.505530   29027 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:13:47.440188995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:13:47.549224   29027 out.go:177] * Using the docker driver based on existing profile
	I0906 15:13:47.570151   29027 start.go:284] selected driver: docker
	I0906 15:13:47.570171   29027 start.go:808] validating driver "docker" against &{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:13:47.570305   29027 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:13:47.570446   29027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:13:47.698814   29027 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:13:47.634507209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:13:47.700850   29027 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:13:47.700876   29027 cni.go:95] Creating CNI manager for ""
	I0906 15:13:47.700884   29027 cni.go:156] 2 nodes found, recommending kindnet
	I0906 15:13:47.700898   29027 start_flags.go:310] config:
	{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:13:47.722711   29027 out.go:177] * Starting control plane node multinode-20220906150606-22187 in cluster multinode-20220906150606-22187
	I0906 15:13:47.765800   29027 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:13:47.787499   29027 out.go:177] * Pulling base image ...
	I0906 15:13:47.831038   29027 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:13:47.831062   29027 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:13:47.831139   29027 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:13:47.831161   29027 cache.go:57] Caching tarball of preloaded images
	I0906 15:13:47.831935   29027 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:13:47.832130   29027 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:13:47.832517   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:13:47.894503   29027 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:13:47.894519   29027 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:13:47.894528   29027 cache.go:208] Successfully downloaded all kic artifacts
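Note: the "exists in daemon, skipping load" branch above just means the pinned kicbase reference already resolves in the local Docker daemon. The same check can be reproduced by hand, as a sketch (digest pin dropped for brevity):

	docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482 >/dev/null 2>&1 \
	  && echo "kicbase already present in the local daemon"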
	I0906 15:13:47.894564   29027 start.go:364] acquiring machines lock for multinode-20220906150606-22187: {Name:mk1f646be94138ec52cb695dba30aa00d55e22df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:13:47.894639   29027 start.go:368] acquired machines lock for "multinode-20220906150606-22187" in 55.567µs
	I0906 15:13:47.894657   29027 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:13:47.894668   29027 fix.go:55] fixHost starting: 
	I0906 15:13:47.894924   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:13:47.957408   29027 fix.go:103] recreateIfNeeded on multinode-20220906150606-22187: state=Stopped err=<nil>
	W0906 15:13:47.957439   29027 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:13:48.001523   29027 out.go:177] * Restarting existing docker container for "multinode-20220906150606-22187" ...
	I0906 15:13:48.029853   29027 cli_runner.go:164] Run: docker start multinode-20220906150606-22187
	I0906 15:13:48.361102   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:13:48.425826   29027 kic.go:415] container "multinode-20220906150606-22187" state is running.
	I0906 15:13:48.426466   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:13:48.491495   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:13:48.491891   29027 machine.go:88] provisioning docker machine ...
	I0906 15:13:48.491915   29027 ubuntu.go:169] provisioning hostname "multinode-20220906150606-22187"
	I0906 15:13:48.491973   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:48.558160   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:48.558370   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:48.558383   29027 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220906150606-22187 && echo "multinode-20220906150606-22187" | sudo tee /etc/hostname
	I0906 15:13:48.680446   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220906150606-22187
	
	I0906 15:13:48.680539   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:48.743904   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:48.744058   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:48.744072   29027 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220906150606-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220906150606-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220906150606-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:13:48.853817   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: 
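The script above is the usual idempotent hostname fix-up: it leaves /etc/hosts alone when the name is already present, rewrites an existing 127.0.1.1 entry if there is one, and appends a new entry otherwise. A standalone sketch of the same pattern (hostname hard-coded here for illustration):

	#!/usr/bin/env bash
	# Ensure /etc/hosts maps 127.0.1.1 to the machine's hostname, idempotently.
	NAME="multinode-20220906150606-22187"
	if ! grep -q "\s${NAME}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NAME}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
	  fi
	fi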
	I0906 15:13:48.853835   29027 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:13:48.853855   29027 ubuntu.go:177] setting up certificates
	I0906 15:13:48.853865   29027 provision.go:83] configureAuth start
	I0906 15:13:48.853930   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:13:48.919529   29027 provision.go:138] copyHostCerts
	I0906 15:13:48.919578   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:13:48.919647   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:13:48.919659   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:13:48.919763   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:13:48.919932   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:13:48.919965   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:13:48.919969   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:13:48.920055   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:13:48.920752   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:13:48.920870   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:13:48.920877   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:13:48.920974   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:13:48.921152   29027 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.multinode-20220906150606-22187 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220906150606-22187]
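The server certificate itself is generated inside minikube's Go code; as a rough stand-in, a certificate carrying the same SAN list could be produced with the guest's OpenSSL 1.1.1 (a self-signed sketch, not the CA-signed certificate minikube actually writes):

	# Hypothetical openssl equivalent of the SAN set logged above (self-signed).
	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.multinode-20220906150606-22187" \
	  -addext "subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-20220906150606-22187"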
	I0906 15:13:49.220973   29027 provision.go:172] copyRemoteCerts
	I0906 15:13:49.221038   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:13:49.221086   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:49.285942   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:49.367849   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 15:13:49.367933   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:13:49.386455   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 15:13:49.386527   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0906 15:13:49.403267   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 15:13:49.403334   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:13:49.419766   29027 provision.go:86] duration metric: configureAuth took 565.884308ms
	I0906 15:13:49.419779   29027 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:13:49.419962   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:13:49.420018   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:49.483049   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:49.483249   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:49.483260   29027 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:13:49.595353   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:13:49.595366   29027 ubuntu.go:71] root file system type: overlay
	I0906 15:13:49.595502   29027 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:13:49.595571   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:49.658210   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:49.658397   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:49.658444   29027 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:13:49.783435   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:13:49.783514   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:49.845990   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:13:49.846143   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57272 <nil> <nil>}
	I0906 15:13:49.846157   29027 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:13:49.965444   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: 
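Two idioms are at work here: the unit file is rendered to docker.service.new and only swapped in (with a daemon-reload and restart) when diff reports a change, and the empty ExecStart= line inside the unit clears the inherited command so the new one replaces it rather than appending (systemd rejects multiple ExecStart= values for Type=notify services). The change-guarded install, expanded as a sketch:

	# Install the regenerated unit only if it differs from the live one.
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	fi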
	I0906 15:13:49.965463   29027 machine.go:91] provisioned docker machine in 1.473558658s
	I0906 15:13:49.965472   29027 start.go:300] post-start starting for "multinode-20220906150606-22187" (driver="docker")
	I0906 15:13:49.965478   29027 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:13:49.965540   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:13:49.965593   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:50.028931   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:50.110988   29027 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:13:50.114281   29027 command_runner.go:130] > NAME="Ubuntu"
	I0906 15:13:50.114291   29027 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0906 15:13:50.114295   29027 command_runner.go:130] > ID=ubuntu
	I0906 15:13:50.114301   29027 command_runner.go:130] > ID_LIKE=debian
	I0906 15:13:50.114307   29027 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0906 15:13:50.114310   29027 command_runner.go:130] > VERSION_ID="20.04"
	I0906 15:13:50.114319   29027 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 15:13:50.114323   29027 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 15:13:50.114329   29027 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 15:13:50.114339   29027 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 15:13:50.114351   29027 command_runner.go:130] > VERSION_CODENAME=focal
	I0906 15:13:50.114361   29027 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0906 15:13:50.114441   29027 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:13:50.114454   29027 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:13:50.114482   29027 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:13:50.114493   29027 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:13:50.114502   29027 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:13:50.114610   29027 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:13:50.114757   29027 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:13:50.114763   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /etc/ssl/certs/221872.pem
	I0906 15:13:50.114906   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:13:50.121433   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:13:50.137804   29027 start.go:303] post-start completed in 172.319981ms
	I0906 15:13:50.137874   29027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:13:50.137923   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:50.201243   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:50.282164   29027 command_runner.go:130] > 11%
	I0906 15:13:50.282237   29027 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:13:50.286159   29027 command_runner.go:130] > 50G
	I0906 15:13:50.286417   29027 fix.go:57] fixHost completed within 2.391743544s
	I0906 15:13:50.286429   29027 start.go:83] releasing machines lock for "multinode-20220906150606-22187", held for 2.39177537s
	I0906 15:13:50.286515   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:13:50.349570   29027 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:13:50.349576   29027 ssh_runner.go:195] Run: systemctl --version
	I0906 15:13:50.349684   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:50.349707   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:50.416609   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:50.416976   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:13:50.547839   29027 command_runner.go:130] > <a href="https://github.com/kubernetes/k8s.io/wiki/New-Registry-url-for-Kubernetes-(registry.k8s.io)">Temporary Redirect</a>.
	I0906 15:13:50.547876   29027 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0906 15:13:50.547901   29027 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0906 15:13:50.548029   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:13:50.554938   29027 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0906 15:13:50.566868   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:13:50.629432   29027 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 15:13:50.710880   29027 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:13:50.720066   29027 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0906 15:13:50.720359   29027 command_runner.go:130] > [Unit]
	I0906 15:13:50.720371   29027 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 15:13:50.720378   29027 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 15:13:50.720391   29027 command_runner.go:130] > BindsTo=containerd.service
	I0906 15:13:50.720403   29027 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0906 15:13:50.720412   29027 command_runner.go:130] > Wants=network-online.target
	I0906 15:13:50.720425   29027 command_runner.go:130] > Requires=docker.socket
	I0906 15:13:50.720429   29027 command_runner.go:130] > StartLimitBurst=3
	I0906 15:13:50.720433   29027 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 15:13:50.720436   29027 command_runner.go:130] > [Service]
	I0906 15:13:50.720439   29027 command_runner.go:130] > Type=notify
	I0906 15:13:50.720442   29027 command_runner.go:130] > Restart=on-failure
	I0906 15:13:50.720448   29027 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 15:13:50.720456   29027 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 15:13:50.720462   29027 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 15:13:50.720468   29027 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 15:13:50.720473   29027 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 15:13:50.720479   29027 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 15:13:50.720485   29027 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 15:13:50.720492   29027 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 15:13:50.720500   29027 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 15:13:50.720510   29027 command_runner.go:130] > ExecStart=
	I0906 15:13:50.720522   29027 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0906 15:13:50.720527   29027 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 15:13:50.720533   29027 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 15:13:50.720538   29027 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 15:13:50.720542   29027 command_runner.go:130] > LimitNOFILE=infinity
	I0906 15:13:50.720545   29027 command_runner.go:130] > LimitNPROC=infinity
	I0906 15:13:50.720550   29027 command_runner.go:130] > LimitCORE=infinity
	I0906 15:13:50.720555   29027 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 15:13:50.720559   29027 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 15:13:50.720562   29027 command_runner.go:130] > TasksMax=infinity
	I0906 15:13:50.720567   29027 command_runner.go:130] > TimeoutStartSec=0
	I0906 15:13:50.720572   29027 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 15:13:50.720575   29027 command_runner.go:130] > Delegate=yes
	I0906 15:13:50.720580   29027 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 15:13:50.720584   29027 command_runner.go:130] > KillMode=process
	I0906 15:13:50.720590   29027 command_runner.go:130] > [Install]
	I0906 15:13:50.720594   29027 command_runner.go:130] > WantedBy=multi-user.target
	I0906 15:13:50.720923   29027 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:13:50.720976   29027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:13:50.730262   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:13:50.742550   29027 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:13:50.742561   29027 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
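With both endpoints in /etc/crictl.yaml pointing at cri-dockerd, crictl drives Docker through the CRI shim. The endpoint can also be passed explicitly; assuming cri-docker is up, for example:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock images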
	I0906 15:13:50.743677   29027 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:13:50.809868   29027 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:13:50.875192   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:13:50.937284   29027 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:13:51.189454   29027 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:13:51.260433   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:13:51.323939   29027 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:13:51.333104   29027 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:13:51.333168   29027 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:13:51.336859   29027 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 15:13:51.336870   29027 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 15:13:51.336877   29027 command_runner.go:130] > Device: 96h/150d	Inode: 115         Links: 1
	I0906 15:13:51.336885   29027 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0906 15:13:51.336891   29027 command_runner.go:130] > Access: 2022-09-06 22:13:50.646119795 +0000
	I0906 15:13:51.336896   29027 command_runner.go:130] > Modify: 2022-09-06 22:13:50.646119795 +0000
	I0906 15:13:51.336903   29027 command_runner.go:130] > Change: 2022-09-06 22:13:50.647119795 +0000
	I0906 15:13:51.336907   29027 command_runner.go:130] >  Birth: -
	I0906 15:13:51.337020   29027 start.go:471] Will wait 60s for crictl version
	I0906 15:13:51.337077   29027 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:13:51.364190   29027 command_runner.go:130] > Version:  0.1.0
	I0906 15:13:51.364337   29027 command_runner.go:130] > RuntimeName:  docker
	I0906 15:13:51.364344   29027 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0906 15:13:51.364472   29027 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0906 15:13:51.366993   29027 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:13:51.367064   29027 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:13:51.397823   29027 command_runner.go:130] > 20.10.17
	I0906 15:13:51.400874   29027 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:13:51.433433   29027 command_runner.go:130] > 20.10.17
	I0906 15:13:51.480937   29027 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:13:51.481158   29027 cli_runner.go:164] Run: docker exec -t multinode-20220906150606-22187 dig +short host.docker.internal
	I0906 15:13:51.598181   29027 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:13:51.598330   29027 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:13:51.602397   29027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
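The one-liner above is minikube's hosts-entry refresh: drop any stale host.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts in one step. Expanded for readability (same logic, not the literal code):

	# Rewrite /etc/hosts with a single fresh host.minikube.internal entry.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.65.2\thost.minikube.internal\n'
	} > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts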
	I0906 15:13:51.611602   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:51.674806   29027 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:13:51.674878   29027 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:13:51.701157   29027 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.0
	I0906 15:13:51.701169   29027 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.0
	I0906 15:13:51.701174   29027 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.0
	I0906 15:13:51.701181   29027 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.0
	I0906 15:13:51.701193   29027 command_runner.go:130] > kindest/kindnetd:v20220726-ed811e41
	I0906 15:13:51.701200   29027 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0906 15:13:51.701203   29027 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0906 15:13:51.701211   29027 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0906 15:13:51.701217   29027 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0906 15:13:51.701221   29027 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:13:51.701225   29027 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 15:13:51.704032   29027 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	kindest/kindnetd:v20220726-ed811e41
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 15:13:51.704051   29027 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:13:51.704128   29027 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:13:51.730200   29027 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.0
	I0906 15:13:51.730211   29027 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.0
	I0906 15:13:51.730215   29027 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.0
	I0906 15:13:51.730223   29027 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.0
	I0906 15:13:51.730228   29027 command_runner.go:130] > kindest/kindnetd:v20220726-ed811e41
	I0906 15:13:51.730232   29027 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0906 15:13:51.730236   29027 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0906 15:13:51.730240   29027 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0906 15:13:51.730243   29027 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0906 15:13:51.730248   29027 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:13:51.730253   29027 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 15:13:51.733586   29027 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	kindest/kindnetd:v20220726-ed811e41
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 15:13:51.733607   29027 cache_images.go:84] Images are preloaded, skipping loading
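The preload check is simply a comparison of the daemon's image list against the expected set; it can be reproduced by hand with image names taken from the log above, for example:

	# Sketch: confirm a few of the preloaded control-plane images are present.
	for img in registry.k8s.io/kube-apiserver:v1.25.0 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3; do
	  docker image inspect "$img" >/dev/null && echo "ok: $img"
	done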
	I0906 15:13:51.733693   29027 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:13:51.803278   29027 command_runner.go:130] > systemd
	I0906 15:13:51.806890   29027 cni.go:95] Creating CNI manager for ""
	I0906 15:13:51.806902   29027 cni.go:156] 2 nodes found, recommending kindnet
	I0906 15:13:51.806921   29027 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:13:51.806934   29027 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220906150606-22187 NodeName:multinode-20220906150606-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:13:51.807040   29027 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220906150606-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
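The generated config above is what kubeadm consumes. Fed in by hand it would look something like the following sketch; this is not necessarily the exact phase invocation minikube uses on a restart:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new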
	
	I0906 15:13:51.807114   29027 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220906150606-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
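The kubelet unit and drop-in shown here are written out a few lines below (the "scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf" step) and only take effect after a reload; the usual sequence, sketched:

	sudo systemctl daemon-reload
	sudo systemctl cat kubelet      # should show the unit plus the 10-kubeadm.conf drop-in
	sudo systemctl restart kubelet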
	I0906 15:13:51.807170   29027 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:13:51.813790   29027 command_runner.go:130] > kubeadm
	I0906 15:13:51.813800   29027 command_runner.go:130] > kubectl
	I0906 15:13:51.813803   29027 command_runner.go:130] > kubelet
	I0906 15:13:51.814432   29027 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:13:51.814527   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:13:51.821382   29027 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (492 bytes)
	I0906 15:13:51.833209   29027 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:13:51.845195   29027 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0906 15:13:51.857186   29027 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:13:51.860715   29027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:13:51.869878   29027 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187 for IP: 192.168.58.2
	I0906 15:13:51.869982   29027 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:13:51.870031   29027 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:13:51.870126   29027 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key
	I0906 15:13:51.870187   29027 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key.cee25041
	I0906 15:13:51.870237   29027 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key
	I0906 15:13:51.870244   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 15:13:51.870287   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 15:13:51.870336   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 15:13:51.870363   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 15:13:51.870383   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 15:13:51.870399   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 15:13:51.870415   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 15:13:51.870429   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 15:13:51.870545   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:13:51.870582   29027 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:13:51.870592   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:13:51.870625   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:13:51.870657   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:13:51.870684   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:13:51.870752   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:13:51.870784   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:51.870805   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem -> /usr/share/ca-certificates/22187.pem
	I0906 15:13:51.870821   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /usr/share/ca-certificates/221872.pem
	I0906 15:13:51.871321   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:13:51.887724   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:13:51.904266   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:13:51.920215   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:13:51.936427   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:13:51.952340   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:13:51.968656   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:13:51.985074   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:13:52.000954   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:13:52.017880   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:13:52.034083   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:13:52.050447   29027 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:13:52.062561   29027 ssh_runner.go:195] Run: openssl version
	I0906 15:13:52.067337   29027 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0906 15:13:52.067693   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:13:52.075788   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:52.079463   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:52.079672   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:52.079715   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:13:52.084419   29027 command_runner.go:130] > b5213941
	I0906 15:13:52.084606   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:13:52.091387   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:13:52.098819   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:13:52.123160   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:13:52.123342   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:13:52.123386   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:13:52.128156   29027 command_runner.go:130] > 51391683
	I0906 15:13:52.128441   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:13:52.135351   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:13:52.142955   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:13:52.146637   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:13:52.146751   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:13:52.146791   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:13:52.151309   29027 command_runner.go:130] > 3ec20f2e
	I0906 15:13:52.151687   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
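These <hash>.0 symlinks follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash, which is how consumers of the system trust directory find a CA. The lookup can be exercised directly, as a sketch:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${HASH}.0"
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # expect: OK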
	I0906 15:13:52.158793   29027 kubeadm.go:396] StartCluster: {Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:13:52.158911   29027 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:13:52.187291   29027 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:13:52.194383   29027 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0906 15:13:52.194397   29027 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0906 15:13:52.194408   29027 command_runner.go:130] > /var/lib/minikube/etcd:
	I0906 15:13:52.194415   29027 command_runner.go:130] > member
	I0906 15:13:52.194939   29027 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:13:52.194955   29027 kubeadm.go:627] restartCluster start
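
The restart decision just above hinges on a single probe: if the kubelet's kubeadm files and an etcd data directory already exist on disk, the code attempts a cluster restart rather than a fresh init. A sketch of that probe, assuming a plain local shell; hasExistingCluster is an invented name for illustration:

    package restartcheck

    import "os/exec"

    // hasExistingCluster reports whether a prior kubeadm run left its
    // artifacts behind. ls exits non-zero if any listed path is missing,
    // which is exactly the signal the log's "sudo ls" check relies on.
    func hasExistingCluster() bool {
        return exec.Command("sudo", "ls",
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd").Run() == nil
    }
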
	I0906 15:13:52.194998   29027 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:13:52.201707   29027 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:52.201762   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:13:52.264897   29027 kubeconfig.go:116] verify returned: extract IP: "multinode-20220906150606-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:13:52.264986   29027 kubeconfig.go:127] "multinode-20220906150606-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:13:52.265325   29027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:13:52.265818   29027 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:13:52.266007   29027 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57276", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:13:52.266314   29027 cert_rotation.go:137] Starting client certificate rotation controller
	I0906 15:13:52.266483   29027 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:13:52.273980   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:52.274042   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:52.281997   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:52.482091   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:52.482164   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:52.491685   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:52.683241   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:52.683315   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:52.692824   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:52.883598   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:52.883667   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:52.893419   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.084051   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.084172   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.093622   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.282125   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.282229   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.292313   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.482484   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.482647   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.492680   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.682947   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.683091   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.692474   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:53.882091   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:53.882186   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:53.892124   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.084054   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.084154   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.093444   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.284139   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.284242   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.294417   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.483389   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.483495   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.493549   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.684140   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.684275   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.694746   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:54.884124   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:54.884258   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:54.894796   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.084123   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:55.084272   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:55.094900   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.284217   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:55.284354   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:55.294412   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.294422   29027 api_server.go:165] Checking apiserver status ...
	I0906 15:13:55.294464   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:13:55.302400   29027 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.302411   29027 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
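
The block of near-identical "Checking apiserver status" entries above is a fixed-interval poll: pgrep is retried roughly every 200ms until a kube-apiserver process appears or the overall deadline expires, at which point the code falls through to "needs reconfigure". A self-contained Go sketch of that pattern; waitForAPIServerPID is an invented name, and the interval matches the log's cadence:

    package apiserver

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerPID polls pgrep until the apiserver process shows
    // up or the timeout passes. pgrep -xnf prints the newest matching PID.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return string(out), nil
            }
            time.Sleep(200 * time.Millisecond)
        }
        return "", fmt.Errorf("timed out waiting for the condition")
    }
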
	I0906 15:13:55.302420   29027 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:13:55.302480   29027 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:13:55.333614   29027 command_runner.go:130] > 167b4a4f3306
	I0906 15:13:55.333630   29027 command_runner.go:130] > 06ab6cf627e8
	I0906 15:13:55.333633   29027 command_runner.go:130] > 803ede092469
	I0906 15:13:55.333636   29027 command_runner.go:130] > e266c748731b
	I0906 15:13:55.333640   29027 command_runner.go:130] > c1eee0e53b49
	I0906 15:13:55.333653   29027 command_runner.go:130] > af277a5518c6
	I0906 15:13:55.333656   29027 command_runner.go:130] > 11d34d183821
	I0906 15:13:55.333660   29027 command_runner.go:130] > 4f1337150041
	I0906 15:13:55.333664   29027 command_runner.go:130] > 7596442e53b5
	I0906 15:13:55.333670   29027 command_runner.go:130] > 4c8a1f372186
	I0906 15:13:55.333673   29027 command_runner.go:130] > 3c8f51d8691c
	I0906 15:13:55.333678   29027 command_runner.go:130] > ef78db90e1cf
	I0906 15:13:55.333681   29027 command_runner.go:130] > 62ca7e8901de
	I0906 15:13:55.333685   29027 command_runner.go:130] > 9456ca1d4c44
	I0906 15:13:55.333688   29027 command_runner.go:130] > 8cecea8208ec
	I0906 15:13:55.333691   29027 command_runner.go:130] > c20d3976c12a
	I0906 15:13:55.333696   29027 command_runner.go:130] > 22c8f9d46178
	I0906 15:13:55.333700   29027 command_runner.go:130] > df0852bc7a51
	I0906 15:13:55.333704   29027 command_runner.go:130] > a34f733a43c2
	I0906 15:13:55.333708   29027 command_runner.go:130] > 3c2093315054
	I0906 15:13:55.333714   29027 command_runner.go:130] > fdc326cd3c6a
	I0906 15:13:55.333717   29027 command_runner.go:130] > 4e3670b1600d
	I0906 15:13:55.333721   29027 command_runner.go:130] > 6bd8b364f108
	I0906 15:13:55.333724   29027 command_runner.go:130] > 6d68f544bf54
	I0906 15:13:55.333728   29027 command_runner.go:130] > a165f2074320
	I0906 15:13:55.333732   29027 command_runner.go:130] > 28bc9837a510
	I0906 15:13:55.333741   29027 command_runner.go:130] > 33a1b253bd37
	I0906 15:13:55.333745   29027 command_runner.go:130] > 0c0974b47f92
	I0906 15:13:55.333748   29027 command_runner.go:130] > c27dff0f48e6
	I0906 15:13:55.333752   29027 command_runner.go:130] > 77d6030ab01b
	I0906 15:13:55.333755   29027 command_runner.go:130] > defb450e84c2
	I0906 15:13:55.336896   29027 docker.go:443] Stopping containers: [167b4a4f3306 06ab6cf627e8 803ede092469 e266c748731b c1eee0e53b49 af277a5518c6 11d34d183821 4f1337150041 7596442e53b5 4c8a1f372186 3c8f51d8691c ef78db90e1cf 62ca7e8901de 9456ca1d4c44 8cecea8208ec c20d3976c12a 22c8f9d46178 df0852bc7a51 a34f733a43c2 3c2093315054 fdc326cd3c6a 4e3670b1600d 6bd8b364f108 6d68f544bf54 a165f2074320 28bc9837a510 33a1b253bd37 0c0974b47f92 c27dff0f48e6 77d6030ab01b defb450e84c2]
	I0906 15:13:55.336981   29027 ssh_runner.go:195] Run: docker stop 167b4a4f3306 06ab6cf627e8 803ede092469 e266c748731b c1eee0e53b49 af277a5518c6 11d34d183821 4f1337150041 7596442e53b5 4c8a1f372186 3c8f51d8691c ef78db90e1cf 62ca7e8901de 9456ca1d4c44 8cecea8208ec c20d3976c12a 22c8f9d46178 df0852bc7a51 a34f733a43c2 3c2093315054 fdc326cd3c6a 4e3670b1600d 6bd8b364f108 6d68f544bf54 a165f2074320 28bc9837a510 33a1b253bd37 0c0974b47f92 c27dff0f48e6 77d6030ab01b defb450e84c2
	I0906 15:13:55.364165   29027 command_runner.go:130] > 167b4a4f3306
	I0906 15:13:55.364450   29027 command_runner.go:130] > 06ab6cf627e8
	I0906 15:13:55.364458   29027 command_runner.go:130] > 803ede092469
	I0906 15:13:55.364461   29027 command_runner.go:130] > e266c748731b
	I0906 15:13:55.364465   29027 command_runner.go:130] > c1eee0e53b49
	I0906 15:13:55.364468   29027 command_runner.go:130] > af277a5518c6
	I0906 15:13:55.364471   29027 command_runner.go:130] > 11d34d183821
	I0906 15:13:55.364475   29027 command_runner.go:130] > 4f1337150041
	I0906 15:13:55.364479   29027 command_runner.go:130] > 7596442e53b5
	I0906 15:13:55.364482   29027 command_runner.go:130] > 4c8a1f372186
	I0906 15:13:55.364486   29027 command_runner.go:130] > 3c8f51d8691c
	I0906 15:13:55.364492   29027 command_runner.go:130] > ef78db90e1cf
	I0906 15:13:55.364495   29027 command_runner.go:130] > 62ca7e8901de
	I0906 15:13:55.364504   29027 command_runner.go:130] > 9456ca1d4c44
	I0906 15:13:55.364510   29027 command_runner.go:130] > 8cecea8208ec
	I0906 15:13:55.364515   29027 command_runner.go:130] > c20d3976c12a
	I0906 15:13:55.364519   29027 command_runner.go:130] > 22c8f9d46178
	I0906 15:13:55.364522   29027 command_runner.go:130] > df0852bc7a51
	I0906 15:13:55.364525   29027 command_runner.go:130] > a34f733a43c2
	I0906 15:13:55.364531   29027 command_runner.go:130] > 3c2093315054
	I0906 15:13:55.364537   29027 command_runner.go:130] > fdc326cd3c6a
	I0906 15:13:55.364540   29027 command_runner.go:130] > 4e3670b1600d
	I0906 15:13:55.364544   29027 command_runner.go:130] > 6bd8b364f108
	I0906 15:13:55.364547   29027 command_runner.go:130] > 6d68f544bf54
	I0906 15:13:55.364978   29027 command_runner.go:130] > a165f2074320
	I0906 15:13:55.364986   29027 command_runner.go:130] > 28bc9837a510
	I0906 15:13:55.364990   29027 command_runner.go:130] > 33a1b253bd37
	I0906 15:13:55.364995   29027 command_runner.go:130] > 0c0974b47f92
	I0906 15:13:55.364999   29027 command_runner.go:130] > c27dff0f48e6
	I0906 15:13:55.365004   29027 command_runner.go:130] > 77d6030ab01b
	I0906 15:13:55.365007   29027 command_runner.go:130] > defb450e84c2
	I0906 15:13:55.368221   29027 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:13:55.378263   29027 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:13:55.384917   29027 command_runner.go:130] > -rw------- 1 root root 5639 Sep  6 22:06 /etc/kubernetes/admin.conf
	I0906 15:13:55.384927   29027 command_runner.go:130] > -rw------- 1 root root 5656 Sep  6 22:10 /etc/kubernetes/controller-manager.conf
	I0906 15:13:55.384936   29027 command_runner.go:130] > -rw------- 1 root root 2059 Sep  6 22:06 /etc/kubernetes/kubelet.conf
	I0906 15:13:55.384959   29027 command_runner.go:130] > -rw------- 1 root root 5604 Sep  6 22:10 /etc/kubernetes/scheduler.conf
	I0906 15:13:55.385523   29027 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep  6 22:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Sep  6 22:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:10 /etc/kubernetes/scheduler.conf
	
	I0906 15:13:55.385577   29027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:13:55.392403   29027 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0906 15:13:55.393074   29027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:13:55.399259   29027 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0906 15:13:55.399913   29027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:13:55.406715   29027 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.406761   29027 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:13:55.413337   29027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:13:55.420558   29027 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:13:55.420599   29027 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
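
The grep/rm exchanges above implement a simple reconciliation: any component kubeconfig that no longer points at https://control-plane.minikube.internal:8443 is deleted so the upcoming "kubeadm init phase kubeconfig" regenerates it. A hedged sketch of that loop; pruneStaleKubeconfigs is illustrative, not minikube's function:

    package reconfig

    import "os/exec"

    // pruneStaleKubeconfigs removes component kubeconfigs that do not
    // reference the expected control-plane endpoint. grep exits non-zero
    // when the endpoint string is absent, which triggers the rm.
    func pruneStaleKubeconfigs(endpoint string, files []string) error {
        for _, f := range files {
            if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
                if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                    return err
                }
            }
        }
        return nil
    }
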
	I0906 15:13:55.427219   29027 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:13:55.434385   29027 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:13:55.434398   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:55.474051   29027 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:13:55.474063   29027 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0906 15:13:55.474306   29027 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0906 15:13:55.474317   29027 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:13:55.474530   29027 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0906 15:13:55.474538   29027 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:13:55.474888   29027 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0906 15:13:55.474903   29027 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0906 15:13:55.475057   29027 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:13:55.475465   29027 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:13:55.475573   29027 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:13:55.475580   29027 command_runner.go:130] > [certs] Using the existing "sa" key
	I0906 15:13:55.478725   29027 command_runner.go:130] ! W0906 22:13:55.482272    1138 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:55.478756   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:55.519887   29027 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:13:56.065961   29027 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0906 15:13:56.233494   29027 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0906 15:13:56.423102   29027 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:13:56.548408   29027 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:13:56.552135   29027 command_runner.go:130] ! W0906 22:13:55.528641    1147 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:56.552153   29027 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.073381262s)
	I0906 15:13:56.552165   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:56.600132   29027 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:13:56.600874   29027 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:13:56.601026   29027 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0906 15:13:56.673420   29027 command_runner.go:130] ! W0906 22:13:56.599919    1169 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:56.673439   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:56.716258   29027 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:13:56.716276   29027 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:13:56.718143   29027 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:13:56.719057   29027 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:13:56.725026   29027 command_runner.go:130] ! W0906 22:13:56.725055    1203 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:56.725048   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:13:56.778974   29027 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:13:56.789205   29027 command_runner.go:130] ! W0906 22:13:56.786358    1217 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:13:56.789244   29027 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:13:56.789298   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:13:57.342880   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:13:57.843525   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:13:57.852435   29027 command_runner.go:130] > 1605
	I0906 15:13:57.853425   29027 api_server.go:71] duration metric: took 1.064190026s to wait for apiserver process to appear ...
	I0906 15:13:57.853437   29027 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:13:57.853448   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:01.705131   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:14:01.705147   29027 api_server.go:102] status: https://127.0.0.1:57276/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:14:02.205231   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:02.211661   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:14:02.211675   29027 api_server.go:102] status: https://127.0.0.1:57276/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:14:02.705671   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:02.711757   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:14:02.711779   29027 api_server.go:102] status: https://127.0.0.1:57276/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:14:03.206152   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:03.214466   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 200:
	ok
	I0906 15:14:03.214521   29027 round_trippers.go:463] GET https://127.0.0.1:57276/version
	I0906 15:14:03.214526   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:03.214534   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:03.214540   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:03.221095   29027 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 15:14:03.221107   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:03.221114   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:03.221122   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:03.221127   29027 round_trippers.go:580]     Content-Length: 261
	I0906 15:14:03.221132   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:03 GMT
	I0906 15:14:03.221138   29027 round_trippers.go:580]     Audit-Id: 394126c1-447e-4f2c-b3b9-ac7650fc2135
	I0906 15:14:03.221144   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:03.221149   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:03.221170   29027 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.0",
	  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
	  "gitTreeState": "clean",
	  "buildDate": "2022-08-23T17:38:15Z",
	  "goVersion": "go1.19",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 15:14:03.221218   29027 api_server.go:140] control plane version: v1.25.0
	I0906 15:14:03.221226   29027 api_server.go:130] duration metric: took 5.367765183s to wait for apiserver health ...
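
The healthz exchanges above show the expected startup progression: first a 403 (anonymous requests are rejected before RBAC bootstrap roles exist), then 500s while the rbac/bootstrap-roles and system-priority-classes post-start hooks finish, and finally a plain 200 "ok". A minimal Go poller in the same spirit; TLS verification is skipped here only because this is a localhost sketch, whereas minikube's real client trusts the cluster CA:

    package health

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls /healthz until it returns HTTP 200, logging the
    // intermediate 403/500 bodies the way the test log does.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }
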
	I0906 15:14:03.221237   29027 cni.go:95] Creating CNI manager for ""
	I0906 15:14:03.221243   29027 cni.go:156] 2 nodes found, recommending kindnet
	I0906 15:14:03.242928   29027 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 15:14:03.279764   29027 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 15:14:03.285064   29027 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0906 15:14:03.285078   29027 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0906 15:14:03.285083   29027 command_runner.go:130] > Device: 8eh/142d	Inode: 267134      Links: 1
	I0906 15:14:03.285088   29027 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 15:14:03.285103   29027 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0906 15:14:03.285110   29027 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0906 15:14:03.285116   29027 command_runner.go:130] > Change: 2022-09-06 21:44:51.197359839 +0000
	I0906 15:14:03.285123   29027 command_runner.go:130] >  Birth: -
	I0906 15:14:03.285202   29027 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.0/kubectl ...
	I0906 15:14:03.285211   29027 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0906 15:14:03.297958   29027 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 15:14:03.802620   29027 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0906 15:14:03.804391   29027 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0906 15:14:03.806424   29027 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0906 15:14:03.841320   29027 command_runner.go:130] > daemonset.apps/kindnet configured
	I0906 15:14:03.848638   29027 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:14:03.848700   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:03.848705   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:03.848711   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:03.848718   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:03.854350   29027 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 15:14:03.854372   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:03.854379   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:03.854385   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:03.854390   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:03 GMT
	I0906 15:14:03.854398   29027 round_trippers.go:580]     Audit-Id: d43264b1-eef2-4164-ae3a-dec4b356f994
	I0906 15:14:03.854407   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:03.854424   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:03.855456   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1056"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 86013 chars]
	I0906 15:14:03.858422   29027 system_pods.go:59] 12 kube-system pods found
	I0906 15:14:03.858438   29027 system_pods.go:61] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:14:03.858446   29027 system_pods.go:61] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:14:03.858453   29027 system_pods.go:61] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:14:03.858456   29027 system_pods.go:61] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:14:03.858460   29027 system_pods.go:61] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:14:03.858464   29027 system_pods.go:61] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:14:03.858468   29027 system_pods.go:61] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:14:03.858473   29027 system_pods.go:61] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:14:03.858482   29027 system_pods.go:61] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:14:03.858486   29027 system_pods.go:61] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:14:03.858490   29027 system_pods.go:61] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:14:03.858494   29027 system_pods.go:61] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running
	I0906 15:14:03.858497   29027 system_pods.go:74] duration metric: took 9.849597ms to wait for pod list to return data ...
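
The raw GET / Response Headers / Response Body triples above are client-go's round-tripper debug output for a single Pods list call. The equivalent call through the typed clientset, under the assumption that client-go is available and the repaired kubeconfig path is passed in; listKubeSystemPods is illustrative:

    package syspods

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // listKubeSystemPods lists kube-system pods through the given
    // kubeconfig and reports each pod's phase, mirroring the
    // system_pods.go wait above.
    func listKubeSystemPods(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
        }
        return nil
    }
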
	I0906 15:14:03.858503   29027 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:14:03.858540   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes
	I0906 15:14:03.858544   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:03.858549   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:03.858555   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:03.861168   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:03.861178   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:03.861183   29027 round_trippers.go:580]     Audit-Id: e2293256-7675-4afa-a553-5718bf29a84f
	I0906 15:14:03.861188   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:03.861196   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:03.861202   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:03.861206   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:03.861211   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:03 GMT
	I0906 15:14:03.861384   29027 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1056"},"items":[{"metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet"," [truncated 10244 chars]
	I0906 15:14:03.861868   29027 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:14:03.861880   29027 node_conditions.go:123] node cpu capacity is 6
	I0906 15:14:03.861890   29027 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:14:03.861895   29027 node_conditions.go:123] node cpu capacity is 6
	I0906 15:14:03.861900   29027 node_conditions.go:105] duration metric: took 3.393101ms to run NodePressure ...
	I0906 15:14:03.861911   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:14:04.044137   29027 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0906 15:14:04.162178   29027 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0906 15:14:04.167024   29027 command_runner.go:130] ! W0906 22:14:03.936156    2008 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:14:04.167051   29027 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:14:04.167118   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0906 15:14:04.167124   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.167130   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.167137   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.170493   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:04.170511   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.170519   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.170527   29027 round_trippers.go:580]     Audit-Id: 1b6ca87a-be5e-49eb-bdd3-214ed5730a44
	I0906 15:14:04.170535   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.170542   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.170548   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.170558   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.171594   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1059"},"items":[{"metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.ad [truncated 30814 chars]
	I0906 15:14:04.172338   29027 kubeadm.go:778] kubelet initialised
	I0906 15:14:04.172348   29027 kubeadm.go:779] duration metric: took 5.289193ms waiting for restarted kubelet to initialise ...
	I0906 15:14:04.172357   29027 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:14:04.172395   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:04.172400   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.172406   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.172411   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.176893   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:04.176909   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.176917   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.176923   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.176930   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.176936   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.176943   29027 round_trippers.go:580]     Audit-Id: 15c57f50-e47e-453d-a3bb-c43aaea62e45
	I0906 15:14:04.176950   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.178636   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1059"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 86013 chars]
	I0906 15:14:04.180813   29027 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:04.180880   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:04.180885   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.180903   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.180912   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.183296   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:04.183310   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.183317   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.183324   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.183332   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.183339   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.183348   29027 round_trippers.go:580]     Audit-Id: 81dc3ec3-9a23-48d7-a9ff-df452ef2b16e
	I0906 15:14:04.183355   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.183440   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"801","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6564 chars]
	I0906 15:14:04.183798   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:04.183805   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.183811   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.183817   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.185944   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:04.185958   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.185965   29027 round_trippers.go:580]     Audit-Id: df875065-edc3-4abf-889a-b6d91ad53a97
	I0906 15:14:04.185972   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.185980   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.185986   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.185991   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.185995   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.186060   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:04.186278   29027 pod_ready.go:92] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:04.186285   29027 pod_ready.go:81] duration metric: took 5.458524ms waiting for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
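
The "has status \"Ready\":\"True\"" message above refers to the PodReady condition in the pod's status, not to the phase alone: a pod can be Running while its containers are still unready, as the etcd pod shows earlier in this log. A small sketch of the check; isPodReady is an invented name, and corev1 is k8s.io/api/core/v1:

    package readycheck

    import corev1 "k8s.io/api/core/v1"

    // isPodReady returns true when the PodReady condition is True, which
    // is the predicate behind the pod_ready.go wait messages in this log.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
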
	I0906 15:14:04.186293   29027 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:04.186324   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:04.186329   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.186336   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.186344   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.188301   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:04.188312   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.188317   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.188322   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.188326   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.188332   29027 round_trippers.go:580]     Audit-Id: 25e0406c-e426-4733-baa2-1347852a18cb
	I0906 15:14:04.188337   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.188342   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.188396   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:04.188640   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:04.188647   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.188653   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.188657   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.232318   29027 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0906 15:14:04.232390   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.232416   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.232430   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.232441   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.232456   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.232465   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.232474   29027 round_trippers.go:580]     Audit-Id: a7a1d783-f406-4965-a8f0-c5aae1590591
	I0906 15:14:04.232654   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:04.733499   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:04.733511   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.733517   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.733522   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.736552   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:04.736567   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.736576   29027 round_trippers.go:580]     Audit-Id: 8550cc60-71fb-4b60-9ba7-d34340cfd598
	I0906 15:14:04.736583   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.736590   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.736596   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.736602   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.736609   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.736692   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:04.736978   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:04.736991   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:04.736999   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:04.737005   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:04.739229   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:04.739241   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:04.739250   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:04.739258   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:04 GMT
	I0906 15:14:04.739265   29027 round_trippers.go:580]     Audit-Id: 1ced4632-2232-4cf5-9462-bfbe835be8dc
	I0906 15:14:04.739272   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:04.739281   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:04.739288   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:04.739372   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:05.233285   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:05.233296   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:05.233303   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:05.233309   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:05.236135   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:05.236145   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:05.236151   29027 round_trippers.go:580]     Audit-Id: 2322dcc8-2e64-4601-a36f-f5dae3aeae17
	I0906 15:14:05.236155   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:05.236160   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:05.236165   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:05.236170   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:05.236174   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:05 GMT
	I0906 15:14:05.236232   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:05.236477   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:05.236484   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:05.236491   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:05.236496   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:05.238399   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:05.238408   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:05.238414   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:05.238418   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:05.238423   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:05.238428   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:05 GMT
	I0906 15:14:05.238432   29027 round_trippers.go:580]     Audit-Id: 23d04807-d8a3-414f-82d2-f903bfa0bc63
	I0906 15:14:05.238438   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:05.238640   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:05.735359   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:05.735403   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:05.735416   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:05.735427   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:05.738956   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:05.738969   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:05.738977   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:05.738983   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:05.738990   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:05 GMT
	I0906 15:14:05.738997   29027 round_trippers.go:580]     Audit-Id: ec98a302-1fd4-47c0-b273-b3fe6a6603c4
	I0906 15:14:05.739003   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:05.739009   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:05.739092   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:05.739420   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:05.739432   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:05.739442   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:05.739450   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:05.741350   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:05.741358   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:05.741365   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:05.741370   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:05.741375   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:05.741380   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:05 GMT
	I0906 15:14:05.741385   29027 round_trippers.go:580]     Audit-Id: 047d62d7-1bfd-453f-aae6-542b222d44ac
	I0906 15:14:05.741390   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:05.741430   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:06.234079   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:06.234096   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:06.234105   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:06.234112   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:06.237352   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:06.237364   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:06.237370   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:06.237374   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:06.237379   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:06 GMT
	I0906 15:14:06.237383   29027 round_trippers.go:580]     Audit-Id: 1c1d76ff-0b78-4dbb-b574-59fbdb4e5e3b
	I0906 15:14:06.237388   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:06.237393   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:06.237457   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:06.237715   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:06.237722   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:06.237728   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:06.237733   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:06.239825   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:06.239836   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:06.239841   29027 round_trippers.go:580]     Audit-Id: 7e207513-1851-4d43-9610-46fdc37e6ecb
	I0906 15:14:06.239846   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:06.239851   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:06.239856   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:06.239861   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:06.239865   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:06 GMT
	I0906 15:14:06.239920   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:06.240109   29027 pod_ready.go:102] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
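
The "Ready":"False" verdict above is read from the status.conditions list of the Pod object these GETs return (the truncated response bodies only show metadata, but the full object carries status as well). A tiny standalone sketch of that extraction, assuming the abridged, hypothetical JSON shape below; the field names follow the Kubernetes Pod API:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Abridged, hypothetical response body; the real one is the untruncated
    // Pod object logged above.
    const body = `{"kind":"Pod","status":{"conditions":[{"type":"Ready","status":"False"}]}}`

    func main() {
        var pod struct {
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        }
        if err := json.Unmarshal([]byte(body), &pod); err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Printf("has status %q:%q\n", c.Type, c.Status) // mirrors the pod_ready log line
            }
        }
    }
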
	I0906 15:14:06.735306   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:06.735330   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:06.735341   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:06.735351   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:06.739873   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:06.739888   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:06.739896   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:06.739903   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:06.739911   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:06.739916   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:06.739921   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:06 GMT
	I0906 15:14:06.739925   29027 round_trippers.go:580]     Audit-Id: f18def18-bffc-46ca-b1e6-22ceb747eabb
	I0906 15:14:06.740008   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:06.740321   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:06.740328   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:06.740337   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:06.740344   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:06.742514   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:06.742525   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:06.742530   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:06.742538   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:06.742544   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:06.742549   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:06 GMT
	I0906 15:14:06.742554   29027 round_trippers.go:580]     Audit-Id: 53ede248-e25f-4928-b951-2e5e9ff24c7b
	I0906 15:14:06.742558   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:06.742610   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:07.233387   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:07.233402   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:07.233411   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:07.233418   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:07.236442   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:07.236457   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:07.236463   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:07.236467   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:07 GMT
	I0906 15:14:07.236476   29027 round_trippers.go:580]     Audit-Id: 4bb587db-be41-4aa2-9aa8-dd3faf8713e8
	I0906 15:14:07.236481   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:07.236490   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:07.236495   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:07.236560   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:07.236816   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:07.236823   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:07.236834   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:07.236840   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:07.239087   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:07.239097   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:07.239103   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:07.239107   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:07.239114   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:07.239119   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:07 GMT
	I0906 15:14:07.239124   29027 round_trippers.go:580]     Audit-Id: 81b1a4cb-0d7d-49d4-9256-7bbcd2e975d8
	I0906 15:14:07.239129   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:07.239366   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:07.735230   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:07.735243   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:07.735249   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:07.735254   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:07.737792   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:07.737802   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:07.737807   29027 round_trippers.go:580]     Audit-Id: 5ce1581c-e11e-4cc0-9ac9-307a4af8d3f7
	I0906 15:14:07.737813   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:07.737818   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:07.737822   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:07.737827   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:07.737832   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:07 GMT
	I0906 15:14:07.737881   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:07.738125   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:07.738131   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:07.738136   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:07.738142   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:07.740009   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:07.740019   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:07.740025   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:07.740033   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:07 GMT
	I0906 15:14:07.740038   29027 round_trippers.go:580]     Audit-Id: 1774b55d-df9e-488b-9421-b59b2fa36a34
	I0906 15:14:07.740042   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:07.740047   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:07.740051   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:07.740105   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:08.233411   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:08.233427   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:08.233436   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:08.233443   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:08.236089   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:08.236111   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:08.236120   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:08.236125   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:08.236130   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:08.236135   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:08.236139   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:08 GMT
	I0906 15:14:08.236145   29027 round_trippers.go:580]     Audit-Id: 6ac90674-6cc7-490b-b447-330e260634ea
	I0906 15:14:08.236206   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:08.236457   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:08.236463   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:08.236470   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:08.236477   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:08.238933   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:08.238945   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:08.238953   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:08.238958   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:08 GMT
	I0906 15:14:08.238968   29027 round_trippers.go:580]     Audit-Id: 6ba70212-6e89-4e32-aa7e-c7be65a66466
	I0906 15:14:08.238980   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:08.238993   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:08.239005   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:08.239345   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:08.735500   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:08.735525   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:08.735537   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:08.735547   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:08.738888   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:08.738903   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:08.738909   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:08.738914   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:08.738921   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:08 GMT
	I0906 15:14:08.738925   29027 round_trippers.go:580]     Audit-Id: 37138eb0-454a-4f93-a7b9-bf228c9751a3
	I0906 15:14:08.738930   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:08.738935   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:08.738993   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:08.739246   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:08.739252   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:08.739258   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:08.739263   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:08.741301   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:08.741310   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:08.741315   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:08.741320   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:08.741325   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:08.741329   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:08.741339   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:08 GMT
	I0906 15:14:08.741346   29027 round_trippers.go:580]     Audit-Id: 742ac44d-07d5-4f5c-9e1f-59beb1191a76
	I0906 15:14:08.741542   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:08.741731   29027 pod_ready.go:102] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"False"
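
The round_trippers.go lines that make up most of this trace are client-go's transport-level debug logging of each request, its headers, the response status, and the response headers; they appear because the test invokes minikube with elevated log verbosity. A rough sketch of that logging pattern in plain net/http (client-go's own debugging round tripper is more elaborate; the endpoint below is purely illustrative):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // loggingRoundTripper prints request/response details around a delegate
    // transport, similar in spirit to the round_trippers output above.
    type loggingRoundTripper struct{ next http.RoundTripper }

    func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
        fmt.Printf("%s %s\n", req.Method, req.URL)
        fmt.Println("Request Headers:")
        for k, v := range req.Header {
            fmt.Printf("    %s: %v\n", k, v)
        }
        start := time.Now()
        resp, err := l.next.RoundTrip(req)
        if err != nil {
            return nil, err
        }
        fmt.Printf("Response Status: %s in %d milliseconds\n", resp.Status, time.Since(start).Milliseconds())
        fmt.Println("Response Headers:")
        for k, v := range resp.Header {
            fmt.Printf("    %s: %v\n", k, v) // Audit-Id, flow-schema UIDs, etc.
        }
        return resp, nil
    }

    func main() {
        client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
        if _, err := client.Get("https://127.0.0.1:57276/version"); err != nil {
            fmt.Println("request failed:", err) // expected unless an apiserver is listening there
        }
    }
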
	I0906 15:14:09.234765   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:09.234794   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:09.234805   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:09.234813   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:09.237578   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:09.237594   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:09.237601   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:09.237607   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:09 GMT
	I0906 15:14:09.237611   29027 round_trippers.go:580]     Audit-Id: f4e405d2-cb3b-4637-bdd7-09071cd24b53
	I0906 15:14:09.237617   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:09.237625   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:09.237632   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:09.237709   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:09.237993   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:09.238001   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:09.238006   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:09.238011   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:09.239971   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:09.239980   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:09.239985   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:09.239990   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:09.239995   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:09.239999   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:09.240004   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:09 GMT
	I0906 15:14:09.240009   29027 round_trippers.go:580]     Audit-Id: 99276aa5-02b0-498c-a8ea-ab0503bae4dc
	I0906 15:14:09.240049   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:09.735411   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:09.735439   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:09.735473   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:09.735485   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:09.739702   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:09.739723   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:09.739731   29027 round_trippers.go:580]     Audit-Id: 34c15cf7-8a19-479e-8de2-5c861ff87693
	I0906 15:14:09.739737   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:09.739744   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:09.739750   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:09.739757   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:09.739762   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:09 GMT
	I0906 15:14:09.739836   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:09.740172   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:09.740178   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:09.740184   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:09.740189   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:09.742179   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:09.742188   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:09.742196   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:09.742202   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:09.742207   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:09.742211   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:09 GMT
	I0906 15:14:09.742216   29027 round_trippers.go:580]     Audit-Id: 3641b9d9-1779-4e34-a07f-6f5b2ec0a6bc
	I0906 15:14:09.742220   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:09.742270   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:10.234698   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:10.234719   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.234731   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.234741   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.238855   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:10.238871   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.238879   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.238883   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.238888   29027 round_trippers.go:580]     Audit-Id: d336bf35-d215-48b7-8aa4-c670535e90a5
	I0906 15:14:10.238894   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.238899   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.238904   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.238959   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1031","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6338 chars]
	I0906 15:14:10.239229   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:10.239235   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.239240   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.239260   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.241325   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:10.241336   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.241342   29027 round_trippers.go:580]     Audit-Id: 7c834241-fe3c-43eb-b3d1-732009b63e5e
	I0906 15:14:10.241347   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.241351   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.241356   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.241361   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.241368   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.241414   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:10.734234   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:10.734245   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.734252   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.734257   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.736488   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:10.736499   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.736504   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.736508   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.736512   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.736516   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.736521   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.736527   29027 round_trippers.go:580]     Audit-Id: 9736e5d0-0792-4ffd-b136-ee4ca4a6eaa5
	I0906 15:14:10.736815   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1107","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6114 chars]
	I0906 15:14:10.737068   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:10.737076   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.737082   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.737089   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.738863   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:10.738872   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.738878   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.738884   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.738889   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.738893   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.738898   29027 round_trippers.go:580]     Audit-Id: 25f472ed-d603-4820-a3b2-dc72441c2c28
	I0906 15:14:10.738906   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.738959   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:10.739150   29027 pod_ready.go:92] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:10.739159   29027 pod_ready.go:81] duration metric: took 6.552837963s waiting for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
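
The cycle traced above (GET the pod, GET its node, repeat on a roughly 500ms tick until the PodReady condition flips) is minikube's pod_ready.go readiness poll. A minimal sketch of that pattern, assuming client-go and using illustrative names (the podwait package and waitPodReady are not minikube's own identifiers):

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-fetches the pod every 500ms, matching the cadence of the
// timestamps in the log, until its PodReady condition reports "True" or the
// budget (4m0s per pod in this run) is exhausted.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				// Corresponds to: pod "..." in "kube-system" namespace has
				// status "Ready":"True"
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
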
	I0906 15:14:10.739172   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:10.739198   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:14:10.739201   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.739207   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.739213   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.741052   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:10.741061   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.741066   29027 round_trippers.go:580]     Audit-Id: 583341c9-7cba-4043-a447-96c4817c0ebd
	I0906 15:14:10.741071   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.741076   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.741081   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.741085   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.741090   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.741155   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"1081","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 8714 chars]
	I0906 15:14:10.741406   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:10.741412   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:10.741418   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:10.741423   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:10.743167   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:10.743176   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:10.743181   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:10.743186   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:10 GMT
	I0906 15:14:10.743191   29027 round_trippers.go:580]     Audit-Id: 3adafee3-0e46-456e-ae91-cffe2411127d
	I0906 15:14:10.743196   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:10.743200   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:10.743205   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:10.743247   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:11.244021   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:14:11.244036   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.244044   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.244051   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.247044   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:11.247054   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.247062   29027 round_trippers.go:580]     Audit-Id: 1a1bb9e8-b15c-4922-920f-a72817904e85
	I0906 15:14:11.247067   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.247072   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.247077   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.247081   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.247086   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.247156   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"1081","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 8714 chars]
	I0906 15:14:11.247428   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:11.247433   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.247439   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.247444   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.249305   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:11.249316   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.249321   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.249327   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.249331   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.249336   29027 round_trippers.go:580]     Audit-Id: 40cac01a-055a-4332-baf1-86de31ea6423
	I0906 15:14:11.249341   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.249345   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.249390   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:11.743647   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:14:11.743666   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.743678   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.743687   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.747373   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:11.747384   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.747391   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.747397   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.747401   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.747407   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.747413   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.747418   29027 round_trippers.go:580]     Audit-Id: b32ac438-2b62-411d-8c62-05e2edd91996
	I0906 15:14:11.747578   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"1113","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 8470 chars]
	I0906 15:14:11.747846   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:11.747853   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.747859   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.747864   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.749818   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:11.749825   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.749830   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.749835   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.749840   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.749844   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.749849   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.749853   29027 round_trippers.go:580]     Audit-Id: c362f4c1-37b2-4661-8c58-141ce4a41552
	I0906 15:14:11.749894   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:11.750712   29027 pod_ready.go:92] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:11.750728   29027 pod_ready.go:81] duration metric: took 1.011545386s waiting for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:11.750750   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:11.750812   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:14:11.750819   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.750828   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.750836   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.753375   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:11.753385   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.753391   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.753395   29027 round_trippers.go:580]     Audit-Id: 22bd3574-8a51-4a00-8f8f-3c05bbb3c6cc
	I0906 15:14:11.753403   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.753409   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.753413   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.753420   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.753533   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"1066","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf
ig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config. [truncated 8307 chars]
	I0906 15:14:11.753795   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:11.753801   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:11.753807   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:11.753812   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:11.755762   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:11.755770   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:11.755775   29027 round_trippers.go:580]     Audit-Id: fdff8381-c2a8-42f0-8e1a-6b64089767ab
	I0906 15:14:11.755780   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:11.755785   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:11.755789   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:11.755795   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:11.755799   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:11 GMT
	I0906 15:14:11.755905   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:12.256214   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:14:12.256229   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.256237   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.256246   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.259105   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:12.259119   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.259125   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.259131   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.259136   29027 round_trippers.go:580]     Audit-Id: 5cd8ab23-eafc-49cf-9c5f-bf19977d6843
	I0906 15:14:12.259141   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.259147   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.259153   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.259217   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"1066","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf
ig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config. [truncated 8307 chars]
	I0906 15:14:12.259504   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:12.259511   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.259517   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.259522   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.261371   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.261380   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.261386   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.261391   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.261396   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.261400   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.261405   29027 round_trippers.go:580]     Audit-Id: d19c562b-acc4-47da-adcf-1b6048dc96a6
	I0906 15:14:12.261411   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.261527   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:12.757163   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:14:12.757184   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.757196   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.757207   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.760950   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:12.760965   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.760974   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.760981   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.760987   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.760993   29027 round_trippers.go:580]     Audit-Id: 7a13768f-e1d9-4201-ae0f-d2bd87d77e47
	I0906 15:14:12.761000   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.761006   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.761517   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"1120","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf
ig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config. [truncated 8045 chars]
	I0906 15:14:12.761825   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:12.761832   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.761838   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.761843   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.763837   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.763846   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.763851   29027 round_trippers.go:580]     Audit-Id: e2b67775-3ee7-42f0-8209-a63321f1d2d3
	I0906 15:14:12.763857   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.763862   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.763867   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.763872   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.763877   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.763921   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:12.764099   29027 pod_ready.go:92] pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:12.764109   29027 pod_ready.go:81] duration metric: took 1.013346421s waiting for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.764115   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.764138   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-czbjx
	I0906 15:14:12.764142   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.764148   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.764153   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.765860   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.765868   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.765873   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.765878   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.765882   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.765888   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.765893   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.765898   29027 round_trippers.go:580]     Audit-Id: a41b2170-c46f-4c8b-b6a1-104d2c0f333c
	I0906 15:14:12.765938   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-czbjx","generateName":"kube-proxy-","namespace":"kube-system","uid":"c88daf0a-05d7-45b7-b888-8e0749e4d321","resourceVersion":"887","creationTimestamp":"2022-09-06T22:08:13Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:08:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5997 chars]
	I0906 15:14:12.766163   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m03
	I0906 15:14:12.766169   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.766174   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.766179   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.767526   29027 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0906 15:14:12.767534   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.767540   29027 round_trippers.go:580]     Content-Length: 238
	I0906 15:14:12.767545   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.767554   29027 round_trippers.go:580]     Audit-Id: 5904103e-cc22-4bcf-a4b5-faa187929fd1
	I0906 15:14:12.767558   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.767564   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.767569   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.767574   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.767590   29027 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-20220906150606-22187-m03\" not found","reason":"NotFound","details":{"name":"multinode-20220906150606-22187-m03","kind":"nodes"},"code":404}
	I0906 15:14:12.767687   29027 pod_ready.go:97] node "multinode-20220906150606-22187-m03" hosting pod "kube-proxy-czbjx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-20220906150606-22187-m03": nodes "multinode-20220906150606-22187-m03" not found
	I0906 15:14:12.767694   29027 pod_ready.go:81] duration metric: took 3.574788ms waiting for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	E0906 15:14:12.767700   29027 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-20220906150606-22187-m03" hosting pod "kube-proxy-czbjx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-20220906150606-22187-m03": nodes "multinode-20220906150606-22187-m03" not found
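
Continuing the same illustrative sketch (fmt plus the client-go imports from above are assumed): the skip path logged at pod_ready.go:97 and pod_ready.go:66 verifies that the node named in the pod's spec.nodeName still exists before waiting on the pod, so the 404 for the deleted m03 node short-circuits the 4m0s wait instead of consuming it:

// nodeHostingPodExists returns the lookup error when the scheduled node is
// gone, e.g. nodes "multinode-20220906150606-22187-m03" not found, letting
// the caller skip the pod rather than poll it for the full timeout.
func nodeHostingPodExists(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod) error {
	if _, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); err != nil {
		return fmt.Errorf("error getting node %q: %w", pod.Spec.NodeName, err)
	}
	return nil
}
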
	I0906 15:14:12.767705   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.767728   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:14:12.767732   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.767737   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.767742   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.769469   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.769477   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.769482   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.769488   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.769494   29027 round_trippers.go:580]     Audit-Id: fac150e7-ed19-4eed-ae4b-f04a075beafb
	I0906 15:14:12.769498   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.769503   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.769508   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.769548   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kkmpm","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b228e9a-6577-46a3-b848-9c9fca602ba6","resourceVersion":"1084","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5765 chars]
	I0906 15:14:12.769773   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:12.769778   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.769784   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.769789   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.771577   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.771585   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.771590   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.771595   29027 round_trippers.go:580]     Audit-Id: 3ca4ab75-d723-46c6-bb30-aabe54d18d8e
	I0906 15:14:12.771599   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.771604   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.771608   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.771613   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.771646   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:12.771835   29027 pod_ready.go:92] pod "kube-proxy-kkmpm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:12.771841   29027 pod_ready.go:81] duration metric: took 4.131523ms waiting for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.771847   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.771867   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:14:12.771871   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.771877   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.771882   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.773648   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.773657   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.773663   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.773668   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.773672   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.773678   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.773683   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.773689   29027 round_trippers.go:580]     Audit-Id: def20370-b2ce-40f7-ab5c-1bc5de5d3026
	I0906 15:14:12.773733   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wnrrx","generateName":"kube-proxy-","namespace":"kube-system","uid":"260cbcc2-7110-48ce-aa3d-482b3694ae6d","resourceVersion":"897","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5770 chars]
	I0906 15:14:12.773957   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:14:12.773962   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.773968   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.773974   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.775768   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:12.775777   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.775782   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.775787   29027 round_trippers.go:580]     Audit-Id: 63735ea6-d066-498e-9281-7ca90b93844b
	I0906 15:14:12.775792   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.775797   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.775802   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.775808   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.775841   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m02","uid":"0cd805fb-0749-46b4-a7e3-90583fb06a8a","resourceVersion":"833","creationTimestamp":"2022-09-06T22:10:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m02","kubernetes.io/os":"linux"},"annotations":{"node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:10:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:10:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":
{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-at [truncated 3821 chars]
	I0906 15:14:12.776009   29027 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:12.776016   29027 pod_ready.go:81] duration metric: took 4.165395ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.776021   29027 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:12.936278   29027 request.go:533] Waited for 160.218151ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:12.936317   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:12.936322   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:12.936387   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:12.936400   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:12.939543   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:12.939561   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:12.939568   29027 round_trippers.go:580]     Audit-Id: ef7226ba-b5f8-45ac-908b-ab45390aeb15
	I0906 15:14:12.939574   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:12.939581   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:12.939587   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:12.939594   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:12.939601   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:12 GMT
	I0906 15:14:12.939690   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1062","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 5171 chars]
	I0906 15:14:13.134425   29027 request.go:533] Waited for 194.450544ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:13.134479   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:13.134486   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:13.134497   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:13.134505   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:13.137645   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:13.137658   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:13.137663   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:13.137668   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:13.137672   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:13.137677   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:13.137681   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:13 GMT
	I0906 15:14:13.137687   29027 round_trippers.go:580]     Audit-Id: 9937fbdd-c523-4f78-9f34-65e8ed352eaa
	I0906 15:14:13.137748   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:13.639233   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:13.639255   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:13.639267   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:13.639279   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:13.643124   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:13.643136   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:13.643142   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:13.643146   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:13.643150   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:13 GMT
	I0906 15:14:13.643156   29027 round_trippers.go:580]     Audit-Id: e8683bdc-3044-42b4-a149-ec62595f451c
	I0906 15:14:13.643160   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:13.643165   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:13.643221   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1062","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 5171 chars]
	I0906 15:14:13.643445   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:13.643451   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:13.643456   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:13.643462   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:13.644934   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:13.644944   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:13.644951   29027 round_trippers.go:580]     Audit-Id: ade35b5e-2ea8-44cc-ab51-d669e0a6e0f9
	I0906 15:14:13.644958   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:13.644963   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:13.644968   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:13.644973   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:13.644977   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:13 GMT
	I0906 15:14:13.645151   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:14.138082   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:14.138092   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:14.138099   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:14.138105   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:14.140473   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:14.140488   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:14.140500   29027 round_trippers.go:580]     Audit-Id: 14c5c68f-939b-43bb-97ad-16ac5c611aa7
	I0906 15:14:14.140508   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:14.140518   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:14.140528   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:14.140534   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:14.140543   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:14 GMT
	I0906 15:14:14.141099   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1062","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 5171 chars]
	I0906 15:14:14.141374   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:14.141388   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:14.141399   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:14.141413   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:14.143499   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:14.143511   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:14.143516   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:14.143523   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:14.143529   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:14 GMT
	I0906 15:14:14.143534   29027 round_trippers.go:580]     Audit-Id: 41f3f7f2-b489-4631-831f-954e40b3fc69
	I0906 15:14:14.143540   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:14.143544   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:14.143600   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:14.640166   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:14.640190   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:14.640203   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:14.640233   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:14.644237   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:14.644253   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:14.644261   29027 round_trippers.go:580]     Audit-Id: 764cc717-dbfd-4daf-9df4-08278120bf15
	I0906 15:14:14.644273   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:14.644281   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:14.644287   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:14.644293   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:14.644302   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:14 GMT
	I0906 15:14:14.644370   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1062","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 5171 chars]
	I0906 15:14:14.644658   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:14.644666   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:14.644674   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:14.644681   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:14.646585   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:14.646594   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:14.646600   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:14.646605   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:14.646610   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:14 GMT
	I0906 15:14:14.646615   29027 round_trippers.go:580]     Audit-Id: 6232b258-1f91-4c99-ad47-9e023cc3fdcb
	I0906 15:14:14.646620   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:14.646624   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:14.646695   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:15.138136   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:15.138159   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.138171   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.138180   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.141416   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:15.141426   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.141431   29027 round_trippers.go:580]     Audit-Id: c91885bb-a781-4c37-a0e7-fedf3ecd7299
	I0906 15:14:15.141437   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.141442   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.141447   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.141452   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.141456   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.141503   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1138","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:k
ubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 4927 chars]
	I0906 15:14:15.141719   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:15.141725   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.141731   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.141736   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.143344   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:15.143353   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.143358   29027 round_trippers.go:580]     Audit-Id: 287d7240-5abf-4b3e-b560-0f0b4edb1602
	I0906 15:14:15.143363   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.143367   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.143372   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.143377   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.143382   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.143723   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:15.143902   29027 pod_ready.go:92] pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:15.143911   29027 pod_ready.go:81] duration metric: took 2.367876578s waiting for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:15.143918   29027 pod_ready.go:38] duration metric: took 10.971514597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
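The repeated GET pod / GET node pairs above are minikube's roughly 500ms readiness poll from pod_ready.go. A minimal client-go sketch of that polling pattern follows; the helper name waitPodReady is illustrative, and this is not minikube's actual implementation:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod every 500ms (matching the request cadence in
// the log above) until its Ready condition reports True or the timeout hits.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API hiccups as "not ready yet" and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}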
	I0906 15:14:15.143932   29027 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:14:15.151523   29027 command_runner.go:130] > -16
	I0906 15:14:15.151552   29027 ops.go:34] apiserver oom_adj: -16
	I0906 15:14:15.151557   29027 kubeadm.go:631] restartCluster took 22.956516772s
	I0906 15:14:15.151563   29027 kubeadm.go:398] StartCluster complete in 22.99269725s
	I0906 15:14:15.151576   29027 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:14:15.151643   29027 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:14:15.152033   29027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:14:15.152665   29027 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:14:15.152829   29027 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57276", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-2022090615060
6-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:14:15.153018   29027 round_trippers.go:463] GET https://127.0.0.1:57276/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0906 15:14:15.153024   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.153030   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.153035   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.155276   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:15.155286   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.155291   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.155297   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.155302   29027 round_trippers.go:580]     Content-Length: 292
	I0906 15:14:15.155306   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.155315   29027 round_trippers.go:580]     Audit-Id: 566c0fe5-6793-469e-868e-2b5a58149f9a
	I0906 15:14:15.155320   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.155325   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.155338   29027 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a49f3069-8a92-4785-ab5f-7ea0a1721073","resourceVersion":"1132","creationTimestamp":"2022-09-06T22:06:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0906 15:14:15.155416   29027 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220906150606-22187" rescaled to 1
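The scale-subresource GET above (and the resulting "rescaled to 1" decision) maps onto client-go's GetScale/UpdateScale pair. A minimal sketch, with the helper name rescale being illustrative:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescale reads a deployment's scale subresource and only writes it back
// when the replica count actually differs, mirroring the no-op seen above.
func rescale(cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().Deployments(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired size; nothing to PUT
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{})
	return err
}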
	I0906 15:14:15.155447   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:14:15.155446   29027 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:14:15.155475   29027 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0906 15:14:15.178528   29027 out.go:177] * Verifying Kubernetes components...
	I0906 15:14:15.155604   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:14:15.178561   29027 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220906150606-22187"
	I0906 15:14:15.178562   29027 addons.go:65] Setting default-storageclass=true in profile "multinode-20220906150606-22187"
	I0906 15:14:15.211248   29027 command_runner.go:130] > apiVersion: v1
	I0906 15:14:15.220319   29027 command_runner.go:130] > data:
	I0906 15:14:15.220331   29027 command_runner.go:130] >   Corefile: |
	I0906 15:14:15.220332   29027 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220906150606-22187"
	I0906 15:14:15.220339   29027 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220906150606-22187"
	I0906 15:14:15.220339   29027 command_runner.go:130] >     .:53 {
	W0906 15:14:15.220347   29027 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:14:15.220349   29027 command_runner.go:130] >         errors
	I0906 15:14:15.220355   29027 command_runner.go:130] >         health {
	I0906 15:14:15.220362   29027 command_runner.go:130] >            lameduck 5s
	I0906 15:14:15.220362   29027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:14:15.220367   29027 command_runner.go:130] >         }
	I0906 15:14:15.220371   29027 command_runner.go:130] >         ready
	I0906 15:14:15.220380   29027 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0906 15:14:15.220385   29027 command_runner.go:130] >            pods insecure
	I0906 15:14:15.220387   29027 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:14:15.220393   29027 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0906 15:14:15.220398   29027 command_runner.go:130] >            ttl 30
	I0906 15:14:15.220402   29027 command_runner.go:130] >         }
	I0906 15:14:15.220406   29027 command_runner.go:130] >         prometheus :9153
	I0906 15:14:15.220411   29027 command_runner.go:130] >         hosts {
	I0906 15:14:15.220417   29027 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0906 15:14:15.220423   29027 command_runner.go:130] >            fallthrough
	I0906 15:14:15.220429   29027 command_runner.go:130] >         }
	I0906 15:14:15.220435   29027 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0906 15:14:15.220441   29027 command_runner.go:130] >            max_concurrent 1000
	I0906 15:14:15.220447   29027 command_runner.go:130] >         }
	I0906 15:14:15.220452   29027 command_runner.go:130] >         cache 30
	I0906 15:14:15.220456   29027 command_runner.go:130] >         loop
	I0906 15:14:15.220462   29027 command_runner.go:130] >         reload
	I0906 15:14:15.220469   29027 command_runner.go:130] >         loadbalance
	I0906 15:14:15.220474   29027 command_runner.go:130] >     }
	I0906 15:14:15.220479   29027 command_runner.go:130] > kind: ConfigMap
	I0906 15:14:15.220483   29027 command_runner.go:130] > metadata:
	I0906 15:14:15.220487   29027 command_runner.go:130] >   creationTimestamp: "2022-09-06T22:06:35Z"
	I0906 15:14:15.220490   29027 command_runner.go:130] >   name: coredns
	I0906 15:14:15.220494   29027 command_runner.go:130] >   namespace: kube-system
	I0906 15:14:15.220498   29027 command_runner.go:130] >   resourceVersion: "371"
	I0906 15:14:15.220512   29027 command_runner.go:130] >   uid: 99586de8-1370-4877-aa2d-6bd1c7354337
	I0906 15:14:15.220569   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:14:15.220571   29027 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
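The Corefile dumped piecewise above already carries a hosts block with 192.168.65.2 host.minikube.internal, which is why start.go logs "already contains ... skipping". The idempotency check amounts to a substring test on the ConfigMap data; a trivial sketch (helper name illustrative):

package sketch

import "strings"

// needsHostRecord reports whether the Corefile still lacks the
// host.minikube.internal entry that minikube injects into CoreDNS.
func needsHostRecord(corefile string) bool {
	return !strings.Contains(corefile, "host.minikube.internal")
}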
	I0906 15:14:15.220682   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:14:15.230847   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:14:15.294084   29027 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:14:15.314614   29027 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:14:15.314904   29027 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57276", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-2022090615060
6-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:14:15.335586   29027 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:14:15.335607   29027 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:14:15.335717   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
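"scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)" means the manifest is streamed from an in-memory asset to the node over SSH rather than read from a local file. A simplified stand-in using golang.org/x/crypto/ssh (minikube's ssh_runner speaks the real scp protocol; the sudo tee shortcut here is an assumption):

package sketch

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// scpMemory pushes an in-memory byte slice to dst on the remote node.
// Simplified: pipes the bytes through "sudo tee" instead of speaking scp.
func scpMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + dst + " >/dev/null")
}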
	I0906 15:14:15.335882   29027 round_trippers.go:463] GET https://127.0.0.1:57276/apis/storage.k8s.io/v1/storageclasses
	I0906 15:14:15.335896   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.335909   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.335922   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.339595   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:15.339613   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.339619   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.339623   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.339628   29027 round_trippers.go:580]     Content-Length: 1274
	I0906 15:14:15.339633   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.339637   29027 round_trippers.go:580]     Audit-Id: 9a3b5a9e-1744-4d3f-b2fa-be3c3759ced1
	I0906 15:14:15.339641   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.339645   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.339686   29027 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1138"},"items":[{"metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I0906 15:14:15.340070   29027 request.go:1073] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0906 15:14:15.340105   29027 round_trippers.go:463] PUT https://127.0.0.1:57276/apis/storage.k8s.io/v1/storageclasses/standard
	I0906 15:14:15.340109   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.340115   29027 round_trippers.go:473]     Content-Type: application/json
	I0906 15:14:15.340120   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.340125   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.341941   29027 node_ready.go:35] waiting up to 6m0s for node "multinode-20220906150606-22187" to be "Ready" ...
	I0906 15:14:15.342010   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:15.342015   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.342021   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.342032   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.344133   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:15.344142   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:15.344147   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.344159   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.344159   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.344167   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.344173   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.344175   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.344179   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.344186   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.344201   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.344201   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.344207   29027 round_trippers.go:580]     Content-Length: 1220
	I0906 15:14:15.344211   29027 round_trippers.go:580]     Audit-Id: df18436a-8f86-44a9-8a96-b0631ed12e71
	I0906 15:14:15.344213   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.344220   29027 round_trippers.go:580]     Audit-Id: ea4686e1-24c8-4463-90bf-1c9e9df78d3c
	I0906 15:14:15.344226   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.344258   29027 request.go:1073] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"130fa9ec-5d5d-4c62-941f-e49f6a02e8a1","resourceVersion":"380","creationTimestamp":"2022-09-06T22:06:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-09-06T22:06:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0906 15:14:15.344287   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:15.344366   29027 addons.go:153] Setting addon default-storageclass=true in "multinode-20220906150606-22187"
	W0906 15:14:15.344376   29027 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:14:15.344397   29027 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:14:15.344510   29027 node_ready.go:49] node "multinode-20220906150606-22187" has status "Ready":"True"
	I0906 15:14:15.344518   29027 node_ready.go:38] duration metric: took 2.562ms waiting for node "multinode-20220906150606-22187" to be "Ready" ...
	I0906 15:14:15.344528   29027 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:14:15.344566   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:15.344574   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.344582   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.344590   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.344796   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:14:15.348943   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:15.348971   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.348980   29027 round_trippers.go:580]     Audit-Id: 1ccba07a-4566-487a-afcc-c75fc472142a
	I0906 15:14:15.348988   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.348996   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.349005   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.349011   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.349023   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.349830   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1138"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86023 chars]
	I0906 15:14:15.352004   29027 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:15.352054   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:15.352059   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.352064   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.352070   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.354643   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:15.354664   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.354691   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.354703   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.354710   29027 round_trippers.go:580]     Audit-Id: 38afbccf-a57a-4cac-8196-924f7e1539ca
	I0906 15:14:15.354716   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.354723   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.354728   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.354803   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:15.403881   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:14:15.410760   29027 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:14:15.410771   29027 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:14:15.410831   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:14:15.474843   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:14:15.493757   29027 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:14:15.534855   29027 request.go:533] Waited for 179.694612ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:15.534893   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:15.534898   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:15.534905   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:15.534911   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:15.537280   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:15.537293   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:15.537299   29027 round_trippers.go:580]     Audit-Id: 34af4cc5-3ceb-4e39-ba92-8adb75e37b52
	I0906 15:14:15.537307   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:15.537312   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:15.537317   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:15.537322   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:15.537326   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:15 GMT
	I0906 15:14:15.537388   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:15.562417   29027 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:14:15.661103   29027 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0906 15:14:15.663029   29027 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0906 15:14:15.665062   29027 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0906 15:14:15.667212   29027 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0906 15:14:15.669201   29027 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0906 15:14:15.674660   29027 command_runner.go:130] > pod/storage-provisioner configured
	I0906 15:14:15.730046   29027 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0906 15:14:15.778498   29027 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 15:14:15.799376   29027 addons.go:414] enableAddons completed in 643.899729ms
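Both addon manifests were applied with the node-local kubectl binary under the kubelet's kubeconfig, exactly as the two ssh_runner "kubectl apply" lines above show. Reconstructing that command string from the logged paths (the wrapper function is hypothetical):

package sketch

import "fmt"

// applyAddonCmd rebuilds the addon-apply invocation visible in the log:
// sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
//   /var/lib/minikube/binaries/<ver>/kubectl apply -f <manifest>
func applyAddonCmd(k8sVersion, manifest string) string {
	return fmt.Sprintf(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/%s/kubectl apply -f %s",
		k8sVersion, manifest,
	)
}

For example, applyAddonCmd("v1.25.0", "/etc/kubernetes/addons/storageclass.yaml") reproduces the command logged at 15:14:15.562417.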
	I0906 15:14:16.038054   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:16.038072   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:16.038081   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:16.038087   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:16.041162   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:16.041179   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:16.041185   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:16.041191   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:16.041196   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:16.041201   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:16.041206   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:16 GMT
	I0906 15:14:16.041210   29027 round_trippers.go:580]     Audit-Id: 87b7b163-5168-4afe-89fe-fb71533a4074
	I0906 15:14:16.041278   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:16.041595   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:16.041605   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:16.041611   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:16.041615   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:16.043538   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:16.043547   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:16.043553   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:16 GMT
	I0906 15:14:16.043558   29027 round_trippers.go:580]     Audit-Id: b767f0fe-33b6-428c-97bb-feb252fa3bf0
	I0906 15:14:16.043563   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:16.043567   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:16.043572   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:16.043582   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:16.043630   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:16.538109   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:16.538123   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:16.538132   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:16.538139   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:16.541177   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:16.541187   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:16.541192   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:16.541197   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:16.541202   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:16.541206   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:16.541211   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:16 GMT
	I0906 15:14:16.541216   29027 round_trippers.go:580]     Audit-Id: 799725a2-520c-49a0-8eb0-32c857f93046
	I0906 15:14:16.541280   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:16.541572   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:16.541578   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:16.541585   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:16.541592   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:16.543400   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:16.543410   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:16.543415   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:16 GMT
	I0906 15:14:16.543420   29027 round_trippers.go:580]     Audit-Id: 07cb6d2e-4b64-46dc-ae91-4a0dd994d3d4
	I0906 15:14:16.543428   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:16.543434   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:16.543438   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:16.543443   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:16.543618   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:17.038578   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:17.038603   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:17.038615   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:17.038626   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:17.041741   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:17.041754   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:17.041764   29027 round_trippers.go:580]     Audit-Id: a7f36701-16dc-4907-826a-364df98443f6
	I0906 15:14:17.041773   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:17.041790   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:17.041799   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:17.041808   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:17.041820   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:17 GMT
	I0906 15:14:17.041995   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:17.042288   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:17.042294   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:17.042300   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:17.042305   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:17.044055   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:17.044072   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:17.044090   29027 round_trippers.go:580]     Audit-Id: 9fe55fe8-abaa-4b0a-bd70-02ec59a03f2f
	I0906 15:14:17.044103   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:17.044113   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:17.044122   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:17.044130   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:17.044137   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:17 GMT
	I0906 15:14:17.044183   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:17.537934   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:17.537951   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:17.537963   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:17.537974   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:17.541074   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:17.541087   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:17.541093   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:17.541097   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:17.541102   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:17 GMT
	I0906 15:14:17.541106   29027 round_trippers.go:580]     Audit-Id: 3e3506e9-5f28-4ab8-b88c-c33c4834bd3b
	I0906 15:14:17.541114   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:17.541119   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:17.541183   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:17.541483   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:17.541489   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:17.541494   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:17.541499   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:17.543268   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:17.543278   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:17.543284   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:17.543291   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:17.543297   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:17 GMT
	I0906 15:14:17.543307   29027 round_trippers.go:580]     Audit-Id: 19364591-28a4-4447-abc3-fd3e6269d908
	I0906 15:14:17.543312   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:17.543317   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:17.543594   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:17.543777   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
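Each ~500ms cycle above is one iteration of minikube's readiness wait: pod_ready.go re-fetches the coredns pod (and its node) until the pod's Ready condition turns True, which it never does within this window. A minimal client-go sketch of that kind of poll loop follows; this is not minikube's actual pod_ready.go, and the kubeconfig path, 5-minute timeout, and error handling are assumptions for illustration (the pod name and interval are taken from the log above).

// readiness_poll.go: minimal sketch of a pod-Ready poll loop (assumed, not minikube's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// Poll every 500ms, mirroring the cadence visible in the log timestamps.
	err = wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true, func(ctx context.Context) (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-565d847f94-t6l66", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Printf("pod %q has status Ready:%v\n", pod.Name, cond.Status)
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // no Ready condition yet; keep polling
	})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}

Polling a single pod by name keeps each iteration to one GET (plus the node GET minikube adds); a watch would avoid the repeated requests seen above at the cost of handling reconnects.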
	I0906 15:14:18.037768   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:18.037786   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:18.037794   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:18.037801   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:18.040812   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:18.040826   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:18.040831   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:18.040852   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:18.040860   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:18 GMT
	I0906 15:14:18.040865   29027 round_trippers.go:580]     Audit-Id: 9287653c-0505-4e9b-ac66-e890953d6357
	I0906 15:14:18.040869   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:18.040877   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:18.041041   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:18.041369   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:18.041377   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:18.041383   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:18.041387   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:18.043316   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:18.043328   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:18.043334   29027 round_trippers.go:580]     Audit-Id: 3ebbc5ed-527e-4e36-a259-75b8cbda2f75
	I0906 15:14:18.043338   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:18.043342   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:18.043364   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:18.043373   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:18.043380   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:18 GMT
	I0906 15:14:18.043424   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:18.537858   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:18.537875   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:18.537884   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:18.537891   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:18.540967   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:18.540980   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:18.540986   29027 round_trippers.go:580]     Audit-Id: 580c7abc-d1eb-4d7a-ba5a-5a7bacf8f3dc
	I0906 15:14:18.540991   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:18.540995   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:18.540999   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:18.541024   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:18.541029   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:18 GMT
	I0906 15:14:18.541099   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:18.541391   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:18.541397   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:18.541403   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:18.541409   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:18.543264   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:18.543273   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:18.543278   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:18 GMT
	I0906 15:14:18.543283   29027 round_trippers.go:580]     Audit-Id: 2118579d-4f1b-4484-ab03-e6ee5545445d
	I0906 15:14:18.543288   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:18.543292   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:18.543298   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:18.543303   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:18.543352   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:19.037851   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:19.037875   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:19.037883   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:19.037891   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:19.040667   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:19.040680   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:19.040688   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:19 GMT
	I0906 15:14:19.040699   29027 round_trippers.go:580]     Audit-Id: 7c6ec055-f067-4c33-824d-d9339b29d487
	I0906 15:14:19.040710   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:19.040723   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:19.040732   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:19.040738   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:19.040806   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:19.041171   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:19.041178   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:19.041187   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:19.041194   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:19.043616   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:19.043634   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:19.043641   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:19 GMT
	I0906 15:14:19.043647   29027 round_trippers.go:580]     Audit-Id: 1fb43a50-be41-4d88-8aaa-fc6e71f51b8f
	I0906 15:14:19.043655   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:19.043663   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:19.043672   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:19.043678   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:19.043731   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:19.537849   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:19.537866   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:19.537876   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:19.537884   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:19.540988   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:19.541000   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:19.541006   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:19.541010   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:19.541014   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:19.541019   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:19.541024   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:19 GMT
	I0906 15:14:19.541033   29027 round_trippers.go:580]     Audit-Id: 5945fa61-8216-46ce-85bf-1dbce6dbe601
	I0906 15:14:19.541102   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:19.541404   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:19.541410   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:19.541415   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:19.541421   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:19.543268   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:19.543277   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:19.543283   29027 round_trippers.go:580]     Audit-Id: ba84b224-dabb-4d94-bd00-edbf1e790d1e
	I0906 15:14:19.543289   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:19.543296   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:19.543306   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:19.543311   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:19.543316   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:19 GMT
	I0906 15:14:19.543361   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:20.037761   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:20.037774   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:20.037781   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:20.037786   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:20.044037   29027 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 15:14:20.044050   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:20.044056   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:20.044063   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:20.044069   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:20.044075   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:20 GMT
	I0906 15:14:20.044079   29027 round_trippers.go:580]     Audit-Id: 9612119d-2f38-4aea-ab63-9c71e920c73f
	I0906 15:14:20.044084   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:20.045023   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:20.045347   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:20.045353   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:20.045359   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:20.045364   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:20.047696   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:20.047708   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:20.047715   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:20.047722   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:20 GMT
	I0906 15:14:20.047728   29027 round_trippers.go:580]     Audit-Id: a109a508-b05a-493c-a1d6-d5bcd804b6d3
	I0906 15:14:20.047733   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:20.047741   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:20.047746   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:20.047882   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:20.048084   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:20.539210   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:20.539228   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:20.539237   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:20.539244   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:20.542262   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:20.542277   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:20.542285   29027 round_trippers.go:580]     Audit-Id: 77e69f45-a96b-42dc-8c82-df3386a476c2
	I0906 15:14:20.542291   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:20.542296   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:20.542301   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:20.542305   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:20.542313   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:20 GMT
	I0906 15:14:20.542383   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:20.542725   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:20.542732   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:20.542738   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:20.542743   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:20.544682   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:20.544691   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:20.544696   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:20 GMT
	I0906 15:14:20.544701   29027 round_trippers.go:580]     Audit-Id: 7da4aa16-d483-406f-a975-66d6dee6f8d1
	I0906 15:14:20.544706   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:20.544710   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:20.544716   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:20.544721   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:20.544772   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:21.039890   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:21.039919   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:21.039933   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:21.039943   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:21.043716   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:21.043728   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:21.043733   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:21.043737   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:21 GMT
	I0906 15:14:21.043742   29027 round_trippers.go:580]     Audit-Id: 8b605e70-e3d9-4aef-a3b0-a376d6fa2069
	I0906 15:14:21.043752   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:21.043756   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:21.043760   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:21.044058   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:21.044356   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:21.044362   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:21.044367   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:21.044373   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:21.046149   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:21.046157   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:21.046161   29027 round_trippers.go:580]     Audit-Id: 995dc6cb-c3bc-4850-b49b-d8d701f507f0
	I0906 15:14:21.046166   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:21.046173   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:21.046178   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:21.046183   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:21.046187   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:21 GMT
	I0906 15:14:21.046234   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:21.538965   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:21.538984   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:21.538993   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:21.539000   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:21.542223   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:21.542237   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:21.542243   29027 round_trippers.go:580]     Audit-Id: 39edbcda-5de0-4720-b996-4908b757a8f2
	I0906 15:14:21.542251   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:21.542255   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:21.542260   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:21.542268   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:21.542273   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:21 GMT
	I0906 15:14:21.542341   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:21.542644   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:21.542651   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:21.542657   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:21.542662   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:21.544922   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:21.544933   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:21.544940   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:21.544945   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:21.544950   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:21.544955   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:21.544960   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:21 GMT
	I0906 15:14:21.544965   29027 round_trippers.go:580]     Audit-Id: a62ea4a9-3f8b-4e61-b73d-3f4bd8cca9e5
	I0906 15:14:21.545160   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:22.039504   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:22.039550   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:22.039573   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:22.039581   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:22.042664   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:22.042676   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:22.042681   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:22.042685   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:22.042690   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:22.042695   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:22 GMT
	I0906 15:14:22.042700   29027 round_trippers.go:580]     Audit-Id: bce130fe-a5c2-4231-a10d-6bf1335b6362
	I0906 15:14:22.042704   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:22.042763   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:22.043055   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:22.043061   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:22.043067   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:22.043072   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:22.045572   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:22.045581   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:22.045588   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:22.045594   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:22.045598   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:22.045603   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:22.045608   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:22 GMT
	I0906 15:14:22.045612   29027 round_trippers.go:580]     Audit-Id: 1263b504-66e0-49f8-9a13-0a8a05dc31b3
	I0906 15:14:22.045656   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:22.537803   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:22.537814   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:22.537820   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:22.537825   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:22.540282   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:22.540292   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:22.540298   29027 round_trippers.go:580]     Audit-Id: 412e21e7-8317-43c3-ac4e-1ed170d65eb5
	I0906 15:14:22.540305   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:22.540316   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:22.540327   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:22.540357   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:22.540369   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:22 GMT
	I0906 15:14:22.540636   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:22.540926   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:22.540932   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:22.540937   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:22.540942   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:22.542685   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:22.542694   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:22.542699   29027 round_trippers.go:580]     Audit-Id: a76985b3-d8ce-4cb3-b9f7-4c47407dc47b
	I0906 15:14:22.542704   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:22.542709   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:22.542713   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:22.542718   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:22.542723   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:22 GMT
	I0906 15:14:22.542765   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:22.542965   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
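The half-second cadence above is a readiness wait: GET the pod, inspect its Ready condition, GET the node, sleep roughly 500ms, repeat until the condition flips to True or the wait times out. The following is a minimal client-go sketch of that loop, not minikube's actual pod_ready.go; the kubeconfig path, timeout, namespace, and pod name are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; minikube maintains one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed overall deadline for the wait.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-565d847f94-t6l66", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if podReady(pod) {
			fmt.Printf("pod %q is Ready\n", pod.Name)
			return
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod.Name, pod.Namespace)
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod readiness")
		case <-time.After(500 * time.Millisecond): // matches the cadence in the log
		}
	}
}

The paired node GET in each cycle above suggests the real loop also re-checks the node's status in the same pass before sleeping, though that step is omitted from this sketch.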
	I0906 15:14:23.038174   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:23.038198   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:23.038207   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:23.038214   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:23.041500   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:23.041513   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:23.041519   29027 round_trippers.go:580]     Audit-Id: 591fb38b-0f96-4830-9a74-0b47b227645d
	I0906 15:14:23.041523   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:23.041528   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:23.041532   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:23.041537   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:23.041541   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:23 GMT
	I0906 15:14:23.041609   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:23.041924   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:23.041930   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:23.041936   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:23.041941   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:23.044053   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:23.044062   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:23.044068   29027 round_trippers.go:580]     Audit-Id: e25c8357-9972-4742-bf50-867b9524a93d
	I0906 15:14:23.044073   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:23.044077   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:23.044082   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:23.044087   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:23.044092   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:23 GMT
	I0906 15:14:23.044139   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:23.538283   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:23.538303   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:23.538315   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:23.538325   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:23.541350   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:23.541360   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:23.541367   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:23.541373   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:23 GMT
	I0906 15:14:23.541377   29027 round_trippers.go:580]     Audit-Id: 6fe13596-2642-46b9-8f2c-0394450a1f89
	I0906 15:14:23.541382   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:23.541386   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:23.541391   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:23.541451   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:23.541736   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:23.541742   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:23.541748   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:23.541752   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:23.543611   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:23.543621   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:23.543627   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:23.543631   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:23.543636   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:23 GMT
	I0906 15:14:23.543641   29027 round_trippers.go:580]     Audit-Id: 768595bc-4745-4d7f-8207-faa5ca98df79
	I0906 15:14:23.543646   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:23.543651   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:23.543725   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:24.038331   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:24.038351   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:24.038375   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:24.038384   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:24.041568   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:24.041581   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:24.041591   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:24.041598   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:24.041605   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:24 GMT
	I0906 15:14:24.041615   29027 round_trippers.go:580]     Audit-Id: 2a52053e-d3e1-4ac8-9d25-60e7f4f2323e
	I0906 15:14:24.041623   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:24.041628   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:24.041685   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:24.041998   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:24.042006   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:24.042017   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:24.042031   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:24.043801   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:24.043809   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:24.043814   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:24.043819   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:24.043824   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:24.043829   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:24.043833   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:24 GMT
	I0906 15:14:24.043840   29027 round_trippers.go:580]     Audit-Id: 7704f072-c5f6-4b6a-8081-160a4ee8313e
	I0906 15:14:24.043888   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:24.537911   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:24.537928   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:24.537937   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:24.537948   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:24.540939   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:24.540950   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:24.540955   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:24 GMT
	I0906 15:14:24.540960   29027 round_trippers.go:580]     Audit-Id: d329839b-c8ec-4dad-8517-e9aa9323fb02
	I0906 15:14:24.540965   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:24.540969   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:24.540975   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:24.540981   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:24.541051   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:24.541330   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:24.541336   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:24.541343   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:24.541354   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:24.543088   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:24.543096   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:24.543101   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:24.543106   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:24.543111   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:24.543115   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:24.543120   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:24 GMT
	I0906 15:14:24.543124   29027 round_trippers.go:580]     Audit-Id: aed0ecbd-59d0-44c8-835d-baad3c05e210
	I0906 15:14:24.543808   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:24.544232   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:25.037816   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:25.037831   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:25.037837   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:25.037842   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:25.040674   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:25.040684   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:25.040689   29027 round_trippers.go:580]     Audit-Id: 851b12d6-b83c-4c40-b2d7-aab8cc966a29
	I0906 15:14:25.040694   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:25.040698   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:25.040703   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:25.040707   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:25.040712   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:25 GMT
	I0906 15:14:25.040781   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:25.041080   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:25.041087   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:25.041093   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:25.041098   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:25.042909   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:25.042918   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:25.042924   29027 round_trippers.go:580]     Audit-Id: 79a5c82b-4b00-4387-bb8d-e2d369f36fff
	I0906 15:14:25.042930   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:25.042941   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:25.042948   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:25.042953   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:25.042958   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:25 GMT
	I0906 15:14:25.043175   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:25.538087   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:25.538100   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:25.538106   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:25.538111   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:25.540524   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:25.540534   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:25.540540   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:25.540544   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:25.540549   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:25.540553   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:25 GMT
	I0906 15:14:25.540558   29027 round_trippers.go:580]     Audit-Id: 8e68392b-a1f7-4713-8192-7b131bb32e7f
	I0906 15:14:25.540563   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:25.540638   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:25.540947   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:25.540952   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:25.540958   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:25.540963   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:25.542868   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:25.542877   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:25.542883   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:25.542887   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:25.542892   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:25.542897   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:25.542902   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:25 GMT
	I0906 15:14:25.542906   29027 round_trippers.go:580]     Audit-Id: 4f11cc81-344e-4edc-a496-6c1168c4ea2f
	I0906 15:14:25.542992   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:26.038022   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:26.038047   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:26.038058   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:26.038068   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:26.041630   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:26.041643   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:26.041649   29027 round_trippers.go:580]     Audit-Id: 61b535f8-0765-47d2-ad21-24bfc2ffe936
	I0906 15:14:26.041659   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:26.041664   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:26.041669   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:26.041674   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:26.041680   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:26 GMT
	I0906 15:14:26.041755   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:26.042056   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:26.042062   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:26.042070   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:26.042078   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:26.043890   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:26.043900   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:26.043905   29027 round_trippers.go:580]     Audit-Id: ba9e1b11-219e-4879-b1c9-158b55a783fb
	I0906 15:14:26.043910   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:26.043915   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:26.043920   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:26.043924   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:26.043929   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:26 GMT
	I0906 15:14:26.043976   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:26.539863   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:26.539883   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:26.539895   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:26.539905   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:26.542923   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:26.542935   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:26.542940   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:26.542945   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:26 GMT
	I0906 15:14:26.542949   29027 round_trippers.go:580]     Audit-Id: 9430c572-4184-4355-ad38-6c0a27cd5b02
	I0906 15:14:26.542954   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:26.542958   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:26.542962   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:26.543033   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:26.543333   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:26.543339   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:26.543347   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:26.543354   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:26.546660   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:26.546672   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:26.546677   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:26 GMT
	I0906 15:14:26.546682   29027 round_trippers.go:580]     Audit-Id: a96cd79a-5607-422f-83e0-9e89709c8242
	I0906 15:14:26.546686   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:26.546691   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:26.546699   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:26.546705   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:26.547155   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:26.547349   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
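Every request/response pair in this trace is echoed by client-go's debug round tripper, which engages at high klog verbosity (roughly: v=6 logs method, URL and status; v=7 adds headers; v=8 adds response bodies, truncated as in the request.go:1073 lines above — these thresholds are approximate, not confirmed by this log). A minimal sketch of turning that logging on in a standalone client, using the standard klog flag wiring rather than anything minikube-specific:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)                     // registers -v, -alsologtostderr, ...
	_ = flag.Set("v", "8")                  // headers plus (truncated) response bodies
	_ = flag.Set("alsologtostderr", "true") // mirror log output to stderr
	flag.Parse()

	// Any client-go clientset built after this point logs each API round
	// trip in the round_trippers.go / request.go format seen throughout
	// this test output.
}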
	I0906 15:14:27.037849   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:27.037879   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:27.037926   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:27.037940   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:27.041763   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:27.041781   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:27.041788   29027 round_trippers.go:580]     Audit-Id: 515478cb-9474-4102-9159-ddbe923a3452
	I0906 15:14:27.041794   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:27.041803   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:27.041810   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:27.041818   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:27.041826   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:27 GMT
	I0906 15:14:27.041912   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:27.042289   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:27.042296   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:27.042304   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:27.042310   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:27.044256   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:27.044266   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:27.044271   29027 round_trippers.go:580]     Audit-Id: 5f9dda77-c625-4131-bfef-754b506115e0
	I0906 15:14:27.044277   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:27.044281   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:27.044286   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:27.044291   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:27.044296   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:27 GMT
	I0906 15:14:27.044422   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:27.538972   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:27.538997   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:27.539013   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:27.539025   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:27.542375   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:27.542388   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:27.542399   29027 round_trippers.go:580]     Audit-Id: d6ed2255-f7ae-494f-8992-986068b49dd6
	I0906 15:14:27.542405   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:27.542409   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:27.542414   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:27.542419   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:27.542424   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:27 GMT
	I0906 15:14:27.542489   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:27.542780   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:27.542787   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:27.542796   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:27.542809   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:27.545153   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:27.545163   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:27.545168   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:27 GMT
	I0906 15:14:27.545174   29027 round_trippers.go:580]     Audit-Id: 5f42c095-2b94-4d85-bf17-2e09887c6c8e
	I0906 15:14:27.545178   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:27.545183   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:27.545187   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:27.545192   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:27.545240   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:28.037831   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:28.037856   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:28.037890   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:28.037905   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:28.041237   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:28.041246   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:28.041252   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:28 GMT
	I0906 15:14:28.041260   29027 round_trippers.go:580]     Audit-Id: fc626f2b-7bcb-4aad-af18-57d04d7d2dba
	I0906 15:14:28.041265   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:28.041270   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:28.041301   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:28.041306   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:28.041365   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:28.041669   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:28.041676   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:28.041681   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:28.041686   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:28.044357   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:28.044368   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:28.044373   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:28.044379   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:28.044387   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:28.044392   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:28.044397   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:28 GMT
	I0906 15:14:28.044402   29027 round_trippers.go:580]     Audit-Id: 78f93e3c-9e82-4f4c-98e3-b4e0bcbef40b
	I0906 15:14:28.044452   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:28.539927   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:28.539951   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:28.539964   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:28.539975   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:28.543352   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:28.543365   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:28.543370   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:28.543375   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:28.543379   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:28.543384   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:28 GMT
	I0906 15:14:28.543389   29027 round_trippers.go:580]     Audit-Id: 6c4609a8-f36f-45fd-a5b1-586241096d7f
	I0906 15:14:28.543393   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:28.543462   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:28.543759   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:28.543765   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:28.543772   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:28.543777   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:28.545526   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:28.545536   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:28.545541   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:28.545546   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:28 GMT
	I0906 15:14:28.545554   29027 round_trippers.go:580]     Audit-Id: e21e4e6d-bd7e-4da1-947a-69bbab18a276
	I0906 15:14:28.545558   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:28.545563   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:28.545567   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:28.545615   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:29.039636   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:29.039660   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:29.039696   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:29.039734   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:29.043597   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:29.043613   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:29.043626   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:29.043634   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:29.043642   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:29 GMT
	I0906 15:14:29.043655   29027 round_trippers.go:580]     Audit-Id: 93ca9f34-5dff-48cf-af91-d3d03e7f89ef
	I0906 15:14:29.043662   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:29.043670   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:29.043761   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:29.044148   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:29.044154   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:29.044159   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:29.044164   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:29.045912   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:29.045922   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:29.045930   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:29.045937   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:29.045943   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:29.045951   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:29 GMT
	I0906 15:14:29.045957   29027 round_trippers.go:580]     Audit-Id: 59fb3f2a-46ca-4ac4-81d2-848a09e43435
	I0906 15:14:29.045978   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:29.046193   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:29.046386   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:29.539964   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:29.539985   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:29.539998   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:29.540009   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:29.543741   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:29.543757   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:29.543765   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:29.543771   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:29.543777   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:29.543784   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:29 GMT
	I0906 15:14:29.543790   29027 round_trippers.go:580]     Audit-Id: bcafdd23-527d-4cbb-b4ff-d990e5f55a54
	I0906 15:14:29.543797   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:29.543876   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:29.544247   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:29.544261   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:29.544269   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:29.544278   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:29.546155   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:29.546164   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:29.546170   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:29.546174   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:29.546180   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:29 GMT
	I0906 15:14:29.546185   29027 round_trippers.go:580]     Audit-Id: 94e1effe-2fa0-4bd1-b2ac-7acf70a128a1
	I0906 15:14:29.546189   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:29.546194   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:29.546238   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:30.039893   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:30.039915   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:30.039927   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:30.039938   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:30.043031   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:30.043041   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:30.043047   29027 round_trippers.go:580]     Audit-Id: cce90fd0-f2cc-4157-a016-67f619a6fb83
	I0906 15:14:30.043068   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:30.043082   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:30.043088   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:30.043094   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:30.043099   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:30 GMT
	I0906 15:14:30.043205   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:30.043497   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:30.043503   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:30.043509   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:30.043514   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:30.045579   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:30.045590   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:30.045597   29027 round_trippers.go:580]     Audit-Id: b9871ec8-59fa-4161-9d1c-7f8528e230d9
	I0906 15:14:30.045604   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:30.045609   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:30.045613   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:30.045618   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:30.045622   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:30 GMT
	I0906 15:14:30.045679   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:30.539363   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:30.539385   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:30.539407   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:30.539418   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:30.543190   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:30.543205   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:30.543212   29027 round_trippers.go:580]     Audit-Id: 61c3f193-0ca9-474a-a53f-383ef29bb613
	I0906 15:14:30.543219   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:30.543227   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:30.543234   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:30.543239   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:30.543245   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:30 GMT
	I0906 15:14:30.543347   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:30.543681   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:30.543688   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:30.543694   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:30.543700   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:30.545826   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:30.545836   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:30.545842   29027 round_trippers.go:580]     Audit-Id: 9201c637-93f1-4825-9e5f-360f20d666c7
	I0906 15:14:30.545846   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:30.545852   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:30.545857   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:30.545862   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:30.545867   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:30 GMT
	I0906 15:14:30.545913   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:31.038882   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:31.038908   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:31.038945   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:31.038957   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:31.042512   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:31.042527   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:31.042534   29027 round_trippers.go:580]     Audit-Id: 531cc090-20f7-410d-8f74-4d55ac670997
	I0906 15:14:31.042540   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:31.042546   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:31.042552   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:31.042557   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:31.042563   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:31 GMT
	I0906 15:14:31.042646   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:31.043051   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:31.043059   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:31.043069   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:31.043078   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:31.044937   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:31.044947   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:31.044952   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:31.044957   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:31.044962   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:31.044966   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:31 GMT
	I0906 15:14:31.044971   29027 round_trippers.go:580]     Audit-Id: 07e71ef2-6658-4426-8df1-efeb67d89052
	I0906 15:14:31.044975   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:31.045020   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:31.537964   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:31.537980   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:31.537989   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:31.537996   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:31.541432   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:31.541445   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:31.541450   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:31.541455   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:31.541459   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:31.541463   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:31.541468   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:31 GMT
	I0906 15:14:31.541473   29027 round_trippers.go:580]     Audit-Id: c0e9e5ed-9091-4fb8-9cda-6942653c6955
	I0906 15:14:31.541538   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:31.541830   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:31.541837   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:31.541842   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:31.541847   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:31.543838   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:31.543848   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:31.543853   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:31.543861   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:31.543866   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:31.543871   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:31.543876   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:31 GMT
	I0906 15:14:31.543881   29027 round_trippers.go:580]     Audit-Id: 96f5dda0-c486-4bd4-ae60-b2d0873ecf41
	I0906 15:14:31.543926   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:31.544105   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
	I0906 15:14:32.039898   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:32.039921   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:32.039932   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:32.039952   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:32.043672   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:32.043694   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:32.043706   29027 round_trippers.go:580]     Audit-Id: 56e1715f-478d-458a-ace0-c8ce280ce079
	I0906 15:14:32.043716   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:32.043730   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:32.043742   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:32.043748   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:32.043755   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:32 GMT
	I0906 15:14:32.043954   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:32.044344   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:32.044353   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:32.044361   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:32.044369   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:32.046094   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:32.046103   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:32.046108   29027 round_trippers.go:580]     Audit-Id: 8c441bcd-0e21-419d-9c94-34a075cc5693
	I0906 15:14:32.046115   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:32.046121   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:32.046125   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:32.046130   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:32.046135   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:32 GMT
	I0906 15:14:32.046286   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:32.539953   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:32.539974   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:32.539986   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:32.539997   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:32.543962   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:32.543984   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:32.543992   29027 round_trippers.go:580]     Audit-Id: 165ecdbf-3f6c-453f-9aef-28fecb40db00
	I0906 15:14:32.543999   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:32.544006   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:32.544012   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:32.544019   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:32.544026   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:32 GMT
	I0906 15:14:32.544106   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:32.544464   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:32.544470   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:32.544476   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:32.544481   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:32.546569   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:32.546579   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:32.546586   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:32.546591   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:32.546596   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:32.546600   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:32.546605   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:32 GMT
	I0906 15:14:32.546609   29027 round_trippers.go:580]     Audit-Id: 1f744d63-e525-4fac-a473-05ab78dc9ebb
	I0906 15:14:32.546652   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:33.037823   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:33.037847   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:33.037858   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:33.037868   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:33.041330   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:33.041340   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:33.041346   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:33 GMT
	I0906 15:14:33.041351   29027 round_trippers.go:580]     Audit-Id: f27f0a43-4d50-4f38-b610-9d8afaa6dc95
	I0906 15:14:33.041357   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:33.041361   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:33.041366   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:33.041373   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:33.041527   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:33.041819   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:33.041825   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:33.041831   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:33.041836   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:33.043735   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:33.043757   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:33.043768   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:33.043775   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:33.043782   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:33.043786   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:33.043791   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:33 GMT
	I0906 15:14:33.043795   29027 round_trippers.go:580]     Audit-Id: 8d0a829a-ce1a-4890-ac17-599ce13dd5ec
	I0906 15:14:33.043998   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:33.539975   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:33.539996   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:33.540009   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:33.540020   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:33.544191   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:33.544206   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:33.544217   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:33 GMT
	I0906 15:14:33.544224   29027 round_trippers.go:580]     Audit-Id: f7b8b39c-8bb5-4ece-9469-310c608b0dd7
	I0906 15:14:33.544232   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:33.544238   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:33.544244   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:33.544250   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:33.544320   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:33.544676   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:33.544682   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:33.544688   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:33.544693   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:33.546731   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:33.546740   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:33.546745   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:33.546750   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:33 GMT
	I0906 15:14:33.546759   29027 round_trippers.go:580]     Audit-Id: ce54d76a-a089-4b31-89f8-97bf10d4a501
	I0906 15:14:33.546764   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:33.546770   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:33.546775   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:33.546821   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:33.547002   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
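
	The GET pairs above repeat roughly every 500 ms: minikube's pod_ready wait loop fetches the coredns pod, checks its Ready condition, fetches the node it is scheduled on, and sleeps before trying again, logging "Ready":"False" on each failed check. As a rough illustration only (a minimal client-go sketch of that polling pattern, not minikube's actual pod_ready.go code; the helper name waitPodReady is hypothetical), the loop looks like this:

	// Sketch (assumed, not minikube's actual code): poll a pod's Ready
	// condition roughly every 500 ms, the pattern visible in the log above.
	package podready

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady returns nil once the pod reports Ready=True, or the
	// context's error if it expires first. The real loop in pod_ready.go
	// also re-fetches the node on every iteration, as the log shows.
	func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // ~500 ms cadence, as logged
		defer ticker.Stop()
		for {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil // pod finally reports "Ready":"True"
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // timed out waiting for readiness
			case <-ticker.C:
			}
		}
	}

	In this test the condition never flips to True, so the loop keeps polling until its deadline and the surrounding test eventually fails.
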
	I0906 15:14:34.037906   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:34.037929   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:34.037940   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:34.037975   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:34.041313   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:34.041328   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:34.041337   29027 round_trippers.go:580]     Audit-Id: 64d38398-62c9-4bce-ae7f-bd85c6b65d1b
	I0906 15:14:34.041345   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:34.041356   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:34.041367   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:34.041374   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:34.041380   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:34 GMT
	I0906 15:14:34.041452   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:34.041762   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:34.041768   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:34.041774   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:34.041780   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:34.043774   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:34.043785   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:34.043794   29027 round_trippers.go:580]     Audit-Id: 2288ac3d-0a98-46e0-87f6-9285f21857c4
	I0906 15:14:34.043800   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:34.043806   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:34.043814   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:34.043821   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:34.043827   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:34 GMT
	I0906 15:14:34.043888   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:34.538926   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:34.538950   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:34.538967   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:34.538978   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:34.542704   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:34.542721   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:34.542729   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:34.542735   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:34.542745   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:34 GMT
	I0906 15:14:34.542752   29027 round_trippers.go:580]     Audit-Id: 37e38ea4-93a1-49ae-b3bc-6b9b0253c7ca
	I0906 15:14:34.542762   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:34.542771   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:34.542874   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:34.543261   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:34.543268   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:34.543274   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:34.543279   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:34.545298   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:34.545307   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:34.545312   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:34.545317   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:34.545321   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:34.545325   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:34 GMT
	I0906 15:14:34.545330   29027 round_trippers.go:580]     Audit-Id: 5330250b-f95d-4585-934d-10877175d093
	I0906 15:14:34.545334   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:34.545514   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:35.038284   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:35.038337   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:35.038351   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:35.038362   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:35.041757   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:35.041779   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:35.041797   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:35.041817   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:35 GMT
	I0906 15:14:35.041833   29027 round_trippers.go:580]     Audit-Id: 71a7a496-0c0c-4257-8d1f-ac70a35f0b6f
	I0906 15:14:35.041851   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:35.041863   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:35.041869   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:35.042252   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:35.042629   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:35.042636   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:35.042641   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:35.042647   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:35.044460   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:35.044469   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:35.044475   29027 round_trippers.go:580]     Audit-Id: 75fe9223-54ba-4c47-8655-686d1120cbc3
	I0906 15:14:35.044489   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:35.044493   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:35.044498   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:35.044503   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:35.044508   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:35 GMT
	I0906 15:14:35.044556   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:35.537914   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:35.537934   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:35.537946   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:35.537956   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:35.541911   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:35.541928   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:35.541935   29027 round_trippers.go:580]     Audit-Id: f6fa57a3-5c2c-4efc-b352-52d572e2ad19
	I0906 15:14:35.541941   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:35.541947   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:35.541953   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:35.541962   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:35.541967   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:35 GMT
	I0906 15:14:35.542037   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:35.542415   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:35.542422   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:35.542428   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:35.542432   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:35.544602   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:35.544612   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:35.544617   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:35.544622   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:35 GMT
	I0906 15:14:35.544627   29027 round_trippers.go:580]     Audit-Id: 43ee3f60-e708-4124-8b0d-65c3e347f70d
	I0906 15:14:35.544631   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:35.544637   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:35.544641   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:35.544692   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:36.038225   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:36.038249   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:36.038262   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:36.038272   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:36.042208   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:36.042225   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:36.042234   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:36.042240   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:36.042253   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:36.042261   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:36 GMT
	I0906 15:14:36.042272   29027 round_trippers.go:580]     Audit-Id: 39835689-2678-4348-8e0e-95c64b867026
	I0906 15:14:36.042279   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:36.042503   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:36.042883   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:36.042891   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:36.042899   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:36.042906   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:36.044728   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:36.044737   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:36.044742   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:36 GMT
	I0906 15:14:36.044747   29027 round_trippers.go:580]     Audit-Id: 51647f90-6eee-4d8c-bc94-a2f56030963d
	I0906 15:14:36.044752   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:36.044756   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:36.044761   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:36.044765   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:36.044811   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:36.044993   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
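
The pod_ready.go:102 status line above marks the end of one polling round: the coredns pod and its node are re-fetched roughly every 500ms until the pod's Ready condition becomes True or the wait times out. A minimal client-go sketch of this kind of readiness poll, for orientation only (this is not minikube's actual pod_ready.go implementation; the kubeconfig path is a placeholder, and the namespace and pod name are copied from this run purely for illustration):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; substitute the one for your profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-565d847f94-t6l66", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", pod.Name)
			return
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod.Name, pod.Namespace)
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
	}
}
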
	I0906 15:14:36.537858   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:36.537875   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:36.537884   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:36.537891   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:36.541222   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:36.541235   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:36.541240   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:36 GMT
	I0906 15:14:36.541244   29027 round_trippers.go:580]     Audit-Id: c3c67978-7f88-4830-9e72-5920158633b7
	I0906 15:14:36.541249   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:36.541253   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:36.541257   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:36.541261   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:36.541320   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:36.541615   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:36.541623   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:36.541628   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:36.541634   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:36.543498   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:36.543508   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:36.543513   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:36.543518   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:36 GMT
	I0906 15:14:36.543523   29027 round_trippers.go:580]     Audit-Id: 7d60adcb-a0b6-4999-a282-858c60316741
	I0906 15:14:36.543527   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:36.543532   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:36.543536   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:36.543583   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:37.037823   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:37.037842   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:37.037851   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:37.037857   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:37.040773   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:37.040784   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:37.040790   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:37.040795   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:37.040800   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:37.040804   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:37.040809   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:37 GMT
	I0906 15:14:37.040814   29027 round_trippers.go:580]     Audit-Id: 1a2f86e5-2441-4fd0-8195-d2019133953c
	I0906 15:14:37.040879   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:37.041169   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:37.041175   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:37.041181   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:37.041186   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:37.042846   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:37.042859   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:37.042864   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:37.042870   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:37.042874   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:37.042879   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:37.042885   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:37 GMT
	I0906 15:14:37.042890   29027 round_trippers.go:580]     Audit-Id: 1fae7b40-f9cd-4ab2-962b-dd4272cf6f2d
	I0906 15:14:37.043113   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:37.538160   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:37.538182   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:37.538195   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:37.538205   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:37.542175   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:37.542188   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:37.542195   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:37 GMT
	I0906 15:14:37.542199   29027 round_trippers.go:580]     Audit-Id: d783c506-c421-44e4-9617-88081656fce3
	I0906 15:14:37.542204   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:37.542210   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:37.542217   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:37.542222   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:37.542284   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:37.542579   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:37.542586   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:37.542591   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:37.542596   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:37.544454   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:37.544467   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:37.544474   29027 round_trippers.go:580]     Audit-Id: ccc74424-d582-447d-a61c-d611efb0fe29
	I0906 15:14:37.544479   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:37.544483   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:37.544487   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:37.544507   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:37.544514   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:37 GMT
	I0906 15:14:37.544715   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:38.037933   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:38.037955   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:38.037968   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:38.037978   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:38.041596   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:38.041612   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:38.041621   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:38.041628   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:38.041634   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:38.041641   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:38 GMT
	I0906 15:14:38.041647   29027 round_trippers.go:580]     Audit-Id: 028fce43-cdf1-41f1-bce7-8c69600f8ca0
	I0906 15:14:38.041662   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:38.042180   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:38.042480   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:38.042486   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:38.042492   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:38.042497   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:38.044188   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:38.044197   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:38.044202   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:38.044207   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:38.044212   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:38.044217   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:38 GMT
	I0906 15:14:38.044221   29027 round_trippers.go:580]     Audit-Id: 5bac9e20-820b-4f53-812d-f9c243cf0ace
	I0906 15:14:38.044226   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:38.044269   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:38.539955   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:38.539976   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:38.539989   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:38.540000   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:38.543737   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:38.543753   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:38.543761   29027 round_trippers.go:580]     Audit-Id: ebad1ffd-9450-454f-a47c-bd82c8be7ada
	I0906 15:14:38.543767   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:38.543773   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:38.543786   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:38.543794   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:38.543800   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:38 GMT
	I0906 15:14:38.543877   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:38.544269   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:38.544277   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:38.544285   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:38.544292   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:38.546174   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:38.546183   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:38.546189   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:38.546196   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:38 GMT
	I0906 15:14:38.546201   29027 round_trippers.go:580]     Audit-Id: d7588191-65ef-4132-8017-128edb8db051
	I0906 15:14:38.546205   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:38.546210   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:38.546214   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:38.546262   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:38.546451   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
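
The repeating GET / Request Headers / Response Headers / truncated Response Body blocks throughout this wait come from client-go's debug round tripper (round_trippers.go) and request logger (request.go); they surface here because the test invokes minikube with --alsologtostderr and a raised -v level. A sketch of how the same tracing could be enabled in any client-go program via klog; the exact verbosity thresholds for URLs, headers, and bodies are internal to client-go, so the value 8 below is an assumption chosen to show maximal detail:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags (-v, -alsologtostderr, ...) so they can be set.
	klog.InitFlags(nil)
	// Raising -v makes client-go's round tripper log request URLs, headers,
	// and (truncated) bodies, similar to the output above. The level 8 is an
	// assumption; lower levels log progressively less detail.
	_ = flag.Set("v", "8")
	_ = flag.Set("alsologtostderr", "true")
	flag.Parse()
	// ... build a clientset as usual; its HTTP traffic is now traced via klog.
}
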
	I0906 15:14:39.039549   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:39.039575   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:39.039587   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:39.039596   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:39.043236   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:39.043253   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:39.043261   29027 round_trippers.go:580]     Audit-Id: 218150fa-2c5e-47d6-94fe-667af2066226
	I0906 15:14:39.043268   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:39.043275   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:39.043282   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:39.043292   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:39.043299   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:39 GMT
	I0906 15:14:39.043381   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:39.043778   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:39.043786   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:39.043792   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:39.043797   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:39.045877   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:39.045886   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:39.045891   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:39.045897   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:39.045902   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:39 GMT
	I0906 15:14:39.045906   29027 round_trippers.go:580]     Audit-Id: 530c6494-9b9a-4121-9f3d-6191debd34d8
	I0906 15:14:39.045911   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:39.045916   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:39.045968   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:39.537933   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:39.537958   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:39.537970   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:39.537980   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:39.541596   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:39.541611   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:39.541620   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:39 GMT
	I0906 15:14:39.541629   29027 round_trippers.go:580]     Audit-Id: dd0f8089-7d7c-4ba5-b6f5-47307c574ba0
	I0906 15:14:39.541636   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:39.541641   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:39.541649   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:39.541655   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:39.541735   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:39.542064   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:39.542070   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:39.542076   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:39.542081   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:39.544091   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:39.544100   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:39.544105   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:39.544110   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:39.544115   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:39.544120   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:39 GMT
	I0906 15:14:39.544126   29027 round_trippers.go:580]     Audit-Id: 8e21543b-8e1c-40ad-9b9f-e049205354ed
	I0906 15:14:39.544134   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:39.544189   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:40.039807   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:40.039822   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:40.039839   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:40.039846   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:40.042204   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:40.042214   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:40.042220   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:40.042225   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:40 GMT
	I0906 15:14:40.042230   29027 round_trippers.go:580]     Audit-Id: 341d1a87-53a8-4ba0-b93f-83e7be8dd858
	I0906 15:14:40.042234   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:40.042239   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:40.042244   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:40.042301   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:40.042607   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:40.042613   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:40.042620   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:40.042625   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:40.044831   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:40.044844   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:40.044852   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:40 GMT
	I0906 15:14:40.044859   29027 round_trippers.go:580]     Audit-Id: ca9fab87-5d83-4b07-8236-fcdc7c0609fd
	I0906 15:14:40.044866   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:40.044874   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:40.044880   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:40.044912   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:40.045255   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:40.537997   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:40.538025   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:40.538062   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:40.538085   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:40.542035   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:40.542046   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:40.542052   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:40.542059   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:40.542070   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:40.542075   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:40 GMT
	I0906 15:14:40.542080   29027 round_trippers.go:580]     Audit-Id: 869bac44-2821-431a-8551-4026d49dabdf
	I0906 15:14:40.542099   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:40.542222   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:40.542507   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:40.542513   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:40.542518   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:40.542524   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:40.544556   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:40.544565   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:40.544571   29027 round_trippers.go:580]     Audit-Id: ed80ec78-8c1a-4b2c-9179-1ce76f9dffe8
	I0906 15:14:40.544576   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:40.544581   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:40.544585   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:40.544590   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:40.544596   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:40 GMT
	I0906 15:14:40.544636   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:41.038740   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:41.038761   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:41.038773   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:41.038782   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:41.042020   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:41.042035   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:41.042046   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:41.042055   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:41 GMT
	I0906 15:14:41.042062   29027 round_trippers.go:580]     Audit-Id: 4fa68311-2376-4028-a0e4-56aa10a3f1b3
	I0906 15:14:41.042070   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:41.042074   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:41.042080   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:41.042235   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:41.042526   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:41.042534   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:41.042539   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:41.042544   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:41.044522   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:41.044531   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:41.044537   29027 round_trippers.go:580]     Audit-Id: 5f68ac15-3524-4efb-bdd9-6ebf142802f1
	I0906 15:14:41.044542   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:41.044547   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:41.044552   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:41.044557   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:41.044562   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:41 GMT
	I0906 15:14:41.044605   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:41.044784   29027 pod_ready.go:102] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"False"
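The block above is one iteration of a readiness poll: roughly every 500ms the test GETs the coredns pod and its node, checks the pod's Ready condition, and logs "Ready":"False" until it flips to True or the 6m0s budget runs out. A minimal sketch of such a loop with client-go, assuming the same 500ms/6m cadence; the helper names are illustrative, and minikube's pod_ready.go layers node checks, duration metrics, and logging on top:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitPodReady polls every 500ms for up to 6 minutes, matching the
    // request cadence and the "waiting up to 6m0s" lines in this trace.
    func waitPodReady(c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return isPodReady(pod), nil
        })
    }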
	I0906 15:14:41.538066   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:41.538085   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:41.538097   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:41.538106   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:41.541536   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:41.541545   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:41.541550   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:41.541555   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:41.541560   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:41.541565   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:41 GMT
	I0906 15:14:41.541569   29027 round_trippers.go:580]     Audit-Id: b2f8bca3-69d0-4895-8527-95bff029cb9a
	I0906 15:14:41.541574   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:41.541625   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:41.541898   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:41.541907   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:41.541913   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:41.541926   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:41.543867   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:41.543875   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:41.543880   29027 round_trippers.go:580]     Audit-Id: 4f02517e-1c8e-4c49-9953-7d91575fcd36
	I0906 15:14:41.543890   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:41.543894   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:41.543899   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:41.543903   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:41.543909   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:41 GMT
	I0906 15:14:41.544211   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:42.039941   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:42.039966   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:42.040001   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:42.040013   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:42.043791   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:42.043807   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:42.043814   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:42.043822   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:42 GMT
	I0906 15:14:42.043830   29027 round_trippers.go:580]     Audit-Id: d2389df5-8b8c-41e2-8c7a-57ed0fdb8ef0
	I0906 15:14:42.043835   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:42.043841   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:42.043848   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:42.043927   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:42.044304   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:42.044311   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:42.044316   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:42.044323   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:42.046372   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:42.046380   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:42.046385   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:42.046390   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:42 GMT
	I0906 15:14:42.046395   29027 round_trippers.go:580]     Audit-Id: 7473e3da-0e35-4a03-876d-65c4fadc059a
	I0906 15:14:42.046400   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:42.046405   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:42.046409   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:42.046453   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:42.538440   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:42.538455   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:42.538463   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:42.538470   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:42.541381   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:42.541392   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:42.541397   29027 round_trippers.go:580]     Audit-Id: 23c5e34c-3f80-4500-aa22-855b4ce316a1
	I0906 15:14:42.541404   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:42.541411   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:42.541424   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:42.541429   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:42.541434   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:42 GMT
	I0906 15:14:42.541516   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1087","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6794 chars]
	I0906 15:14:42.541809   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:42.541815   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:42.541821   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:42.541827   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:42.543774   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:42.543782   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:42.543787   29027 round_trippers.go:580]     Audit-Id: 2c954877-9aa0-4dba-a851-587df6694bb0
	I0906 15:14:42.543791   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:42.543796   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:42.543801   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:42.543806   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:42.543811   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:42 GMT
	I0906 15:14:42.543872   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.038061   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/coredns-565d847f94-t6l66
	I0906 15:14:43.038077   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.038085   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.038092   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.041143   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:43.041155   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.041161   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.041166   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.041171   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.041175   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.041180   29027 round_trippers.go:580]     Audit-Id: 8865d532-192a-452e-80b3-da9b88a2ad14
	I0906 15:14:43.041186   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.041241   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1147","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6565 chars]
	I0906 15:14:43.041528   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.041535   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.041540   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.041546   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.043233   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.043243   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.043248   29027 round_trippers.go:580]     Audit-Id: fbbc047a-af02-44fd-82d8-f037f2af8273
	I0906 15:14:43.043253   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.043258   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.043262   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.043267   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.043272   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.043312   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.043488   29027 pod_ready.go:92] pod "coredns-565d847f94-t6l66" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.043497   29027 pod_ready.go:81] duration metric: took 27.691382399s waiting for pod "coredns-565d847f94-t6l66" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.043504   29027 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.043531   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/etcd-multinode-20220906150606-22187
	I0906 15:14:43.043535   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.043540   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.043546   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.045244   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.045253   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.045259   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.045264   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.045269   29027 round_trippers.go:580]     Audit-Id: 3bebc7b1-2f53-4b66-b82e-12cd21a2e08a
	I0906 15:14:43.045274   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.045278   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.045286   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.045331   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220906150606-22187","namespace":"kube-system","uid":"b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa","resourceVersion":"1107","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.mirror":"17a8fa534af40e76f49967948d56d723","kubernetes.io/config.seen":"2022-09-06T22:06:35.893944378Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 6114 chars]
	I0906 15:14:43.045542   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.045548   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.045553   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.045558   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.047415   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.047423   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.047429   29027 round_trippers.go:580]     Audit-Id: 4a2cf001-0c50-42d5-809f-4006bfcd5a30
	I0906 15:14:43.047434   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.047439   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.047446   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.047451   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.047455   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.047513   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.047689   29027 pod_ready.go:92] pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.047696   29027 pod_ready.go:81] duration metric: took 4.186455ms waiting for pod "etcd-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.047711   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.047737   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220906150606-22187
	I0906 15:14:43.047741   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.047746   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.047752   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.049532   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.049541   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.049547   29027 round_trippers.go:580]     Audit-Id: f227be37-84e5-469b-b9b4-166bdc35fec8
	I0906 15:14:43.049552   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.049557   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.049563   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.049568   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.049573   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.049633   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220906150606-22187","namespace":"kube-system","uid":"b8fcee55-a96c-4a49-9872-f5c791daf820","resourceVersion":"1113","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.mirror":"b09d9c5ed5d683a33e80745b32adae59","kubernetes.io/config.seen":"2022-09-06T22:06:35.893957881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 8470 chars]
	I0906 15:14:43.049884   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.049890   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.049895   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.049900   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.051758   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.051766   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.051771   29027 round_trippers.go:580]     Audit-Id: ae16ea01-400a-4582-9051-668d3bea4818
	I0906 15:14:43.051776   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.051781   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.051785   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.051790   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.051795   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.051833   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.052006   29027 pod_ready.go:92] pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.052012   29027 pod_ready.go:81] duration metric: took 4.295043ms waiting for pod "kube-apiserver-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.052018   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.052044   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220906150606-22187
	I0906 15:14:43.052048   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.052053   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.052058   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.053747   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.053756   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.053762   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.053767   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.053772   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.053777   29027 round_trippers.go:580]     Audit-Id: f1e8ef5c-50e0-4e8c-85a2-65960e0be433
	I0906 15:14:43.053783   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.053787   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.053849   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220906150606-22187","namespace":"kube-system","uid":"d9ca106c-c765-4535-9cda-609a956ab91d","resourceVersion":"1120","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.mirror":"45202fd7561fb99c09f27d6e5d0ba714","kubernetes.io/config.seen":"2022-09-06T22:06:35.893958755Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf
ig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config. [truncated 8045 chars]
	I0906 15:14:43.054106   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.054113   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.054118   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.054123   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.055985   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.055994   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.055999   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.056005   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.056009   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.056014   29027 round_trippers.go:580]     Audit-Id: 5b58e4a3-da2f-4fef-addc-022f3a7e7cd7
	I0906 15:14:43.056019   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.056024   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.056062   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.056232   29027 pod_ready.go:92] pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.056238   29027 pod_ready.go:81] duration metric: took 4.215573ms waiting for pod "kube-controller-manager-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.056243   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.056267   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-czbjx
	I0906 15:14:43.056270   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.056276   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.056281   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.057908   29027 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 15:14:43.057917   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.057922   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.057927   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.057931   29027 round_trippers.go:580]     Audit-Id: 9a92c8bb-6a53-4c05-96ca-eb1282ce2a3d
	I0906 15:14:43.057936   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.057940   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.057945   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.057983   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-czbjx","generateName":"kube-proxy-","namespace":"kube-system","uid":"c88daf0a-05d7-45b7-b888-8e0749e4d321","resourceVersion":"887","creationTimestamp":"2022-09-06T22:08:13Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:08:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5997 chars]
	I0906 15:14:43.058217   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m03
	I0906 15:14:43.058222   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.058228   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.058234   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.059698   29027 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0906 15:14:43.059707   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.059712   29027 round_trippers.go:580]     Audit-Id: 43e2cfff-ad35-436c-8e57-c315e7da8720
	I0906 15:14:43.059717   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.059722   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.059728   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.059733   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.059738   29027 round_trippers.go:580]     Content-Length: 238
	I0906 15:14:43.059742   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.059752   29027 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-20220906150606-22187-m03\" not found","reason":"NotFound","details":{"name":"multinode-20220906150606-22187-m03","kind":"nodes"},"code":404}
	I0906 15:14:43.059790   29027 pod_ready.go:97] node "multinode-20220906150606-22187-m03" hosting pod "kube-proxy-czbjx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-20220906150606-22187-m03": nodes "multinode-20220906150606-22187-m03" not found
	I0906 15:14:43.059796   29027 pod_ready.go:81] duration metric: took 3.54821ms waiting for pod "kube-proxy-czbjx" in "kube-system" namespace to be "Ready" ...
	E0906 15:14:43.059801   29027 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-20220906150606-22187-m03" hosting pod "kube-proxy-czbjx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-20220906150606-22187-m03": nodes "multinode-20220906150606-22187-m03" not found
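Here the poll takes a different branch: the node GET for multinode-20220906150606-22187-m03 returns 404 with a metav1.Status body, so the wait for kube-proxy-czbjx is skipped rather than failed, since a pod bound to a deleted node can never become Ready. A sketch of that check, assuming client-go's apierrors helper; the function name is illustrative:

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeIsGone reports whether the pod's node has been deleted.
    // apierrors.IsNotFound matches the {"kind":"Status",...,"code":404}
    // body shown above, which client-go decodes into a StatusError.
    func nodeIsGone(c kubernetes.Interface, nodeName string) bool {
        _, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
        return apierrors.IsNotFound(err)
    }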
	I0906 15:14:43.059805   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.238308   29027 request.go:533] Waited for 178.471314ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:14:43.238364   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-kkmpm
	I0906 15:14:43.238370   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.238379   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.238387   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.241058   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:43.241068   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.241073   29027 round_trippers.go:580]     Audit-Id: 7e638391-714a-4e9c-917f-3c9e5d4ba643
	I0906 15:14:43.241078   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.241083   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.241087   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.241092   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.241097   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.241144   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kkmpm","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b228e9a-6577-46a3-b848-9c9fca602ba6","resourceVersion":"1084","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5765 chars]
	I0906 15:14:43.438728   29027 request.go:533] Waited for 197.273813ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.438784   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:43.438793   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.438827   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.438845   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.442512   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:43.442523   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.442529   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.442535   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.442539   29027 round_trippers.go:580]     Audit-Id: 2f1b0aa2-f39a-48ce-b6ae-e998cb6dfb48
	I0906 15:14:43.442543   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.442547   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.442552   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.442608   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:43.442797   29027 pod_ready.go:92] pod "kube-proxy-kkmpm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.442804   29027 pod_ready.go:81] duration metric: took 382.989962ms waiting for pod "kube-proxy-kkmpm" in "kube-system" namespace to be "Ready" ...
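The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter delaying requests before they leave the process; they are unrelated to the server-side APF machinery that produces the X-Kubernetes-Pf-* response headers above. The limiter is governed by two rest.Config fields; a sketch with illustrative values (when these are unset, client-go defaults to roughly QPS=5, Burst=10, which a tight poll like this one exhausts quickly):

    import "k8s.io/client-go/rest"

    // tuneThrottling adjusts the client-side token bucket. Once the
    // burst allowance is spent, each request waits for a token, which
    // client-go reports as the "Waited for ..." lines in this trace.
    func tuneThrottling(cfg *rest.Config) {
        cfg.QPS = 5    // steady-state requests per second (illustrative)
        cfg.Burst = 10 // bucket size for short spikes (illustrative)
    }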
	I0906 15:14:43.442811   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.639199   29027 request.go:533] Waited for 196.338015ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:14:43.639280   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-proxy-wnrrx
	I0906 15:14:43.639288   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.639315   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.639324   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.642038   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:43.642050   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.642083   29027 round_trippers.go:580]     Audit-Id: 30533f7f-924d-4b97-beda-f06fdc552b35
	I0906 15:14:43.642090   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.642094   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.642099   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.642104   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.642109   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.642156   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wnrrx","generateName":"kube-proxy-","namespace":"kube-system","uid":"260cbcc2-7110-48ce-aa3d-482b3694ae6d","resourceVersion":"897","creationTimestamp":"2022-09-06T22:07:33Z","labels":{"controller-revision-hash":"55c79b8759","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"421ade55-d00d-4be3-8923-d7446ffeed8d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:07:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"421ade55-d00d-4be3-8923-d7446ffeed8d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5770 chars]
	I0906 15:14:43.838226   29027 request.go:533] Waited for 195.806474ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:14:43.838339   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:14:43.838348   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:43.838363   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:43.838373   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:43.841615   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:43.841629   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:43.841636   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:43.841642   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:43.841648   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:43.841653   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:43 GMT
	I0906 15:14:43.841659   29027 round_trippers.go:580]     Audit-Id: 943a0961-24b2-4ca1-a9c3-ef8109397731
	I0906 15:14:43.841665   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:43.841867   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187-m02","uid":"0cd805fb-0749-46b4-a7e3-90583fb06a8a","resourceVersion":"833","creationTimestamp":"2022-09-06T22:10:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187-m02","kubernetes.io/os":"linux"},"annotations":{"node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:10:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:10:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":
{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-at [truncated 3821 chars]
	I0906 15:14:43.842080   29027 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:43.842091   29027 pod_ready.go:81] duration metric: took 399.272423ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:43.842100   29027 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:44.039245   29027 request.go:533] Waited for 197.100451ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:44.039314   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220906150606-22187
	I0906 15:14:44.039319   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.039329   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.039352   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.042376   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:44.042388   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.042394   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.042400   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.042404   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.042409   29027 round_trippers.go:580]     Audit-Id: 01081d96-a7e9-4d3f-8c5c-a95b9156ea94
	I0906 15:14:44.042413   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.042419   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.042473   29027 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220906150606-22187","namespace":"kube-system","uid":"ada7d5af-ae80-465b-b63c-866ee9dbba95","resourceVersion":"1138","creationTimestamp":"2022-09-06T22:06:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.mirror":"333cc135889bd900cfb5a9d04b40b6ea","kubernetes.io/config.seen":"2022-09-06T22:06:35.893959393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:lab [truncated 4927 chars]
	I0906 15:14:44.238080   29027 request.go:533] Waited for 195.37051ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:44.238111   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187
	I0906 15:14:44.238116   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.238123   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.238129   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.240543   29027 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 15:14:44.240555   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.240560   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.240565   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.240569   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.240574   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.240579   29027 round_trippers.go:580]     Audit-Id: 7f4378fc-cd05-4f9e-8909-d4a7a10a4446
	I0906 15:14:44.240583   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.240634   29027 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-0 [truncated 5377 chars]
	I0906 15:14:44.241292   29027 pod_ready.go:92] pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:14:44.241398   29027 pod_ready.go:81] duration metric: took 399.288758ms waiting for pod "kube-scheduler-multinode-20220906150606-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:14:44.241415   29027 pod_ready.go:38] duration metric: took 28.8967773s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
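Each pod wait above ultimately reads the PodReady condition from the pod's status. A minimal client-go sketch of that check; the function name and plumbing are assumptions, not minikube's pod_ready.go:

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// podReady reports whether the named pod has condition Ready=True, the
// same predicate behind the `has status "Ready":"True"` lines above.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return false, err
    }
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue, nil
        }
    }
    return false, nil
}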
	I0906 15:14:44.241436   29027 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:14:44.241498   29027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:14:44.252777   29027 command_runner.go:130] > 1605
	I0906 15:14:44.253581   29027 api_server.go:71] duration metric: took 29.098018916s to wait for apiserver process to appear ...
	I0906 15:14:44.253594   29027 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:14:44.253601   29027 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57276/healthz ...
	I0906 15:14:44.258494   29027 api_server.go:266] https://127.0.0.1:57276/healthz returned 200:
	ok
	I0906 15:14:44.258522   29027 round_trippers.go:463] GET https://127.0.0.1:57276/version
	I0906 15:14:44.258526   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.258532   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.258538   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.259534   29027 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 15:14:44.259543   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.259549   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.259554   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.259558   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.259563   29027 round_trippers.go:580]     Content-Length: 261
	I0906 15:14:44.259567   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.259572   29027 round_trippers.go:580]     Audit-Id: 21b0a84c-d97e-4539-935f-c58786521315
	I0906 15:14:44.259578   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.259593   29027 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.0",
	  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
	  "gitTreeState": "clean",
	  "buildDate": "2022-08-23T17:38:15Z",
	  "goVersion": "go1.19",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 15:14:44.259619   29027 api_server.go:140] control plane version: v1.25.0
	I0906 15:14:44.259625   29027 api_server.go:130] duration metric: took 6.026551ms to wait for apiserver health ...
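The healthz and version probes above are two plain GETs: /healthz must return the literal body "ok", and /version decodes into apimachinery's version.Info, whose GitVersion becomes the "control plane version" line. A sketch, with client-certificate authentication omitted and trust-all TLS assumed for a local 127.0.0.1 test endpoint:

import (
    "crypto/tls"
    "encoding/json"
    "fmt"
    "io"
    "net/http"

    "k8s.io/apimachinery/pkg/version"
)

// controlPlaneVersion checks /healthz, then reads GitVersion from /version.
func controlPlaneVersion(base string) (string, error) {
    c := &http.Client{Transport: &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local endpoint only
    }}
    hres, err := c.Get(base + "/healthz")
    if err != nil {
        return "", err
    }
    body, _ := io.ReadAll(hres.Body)
    hres.Body.Close()
    if string(body) != "ok" {
        return "", fmt.Errorf("healthz returned %q", body)
    }
    vres, err := c.Get(base + "/version")
    if err != nil {
        return "", err
    }
    defer vres.Body.Close()
    var v version.Info
    if err := json.NewDecoder(vres.Body).Decode(&v); err != nil {
        return "", err
    }
    return v.GitVersion, nil // "v1.25.0" in the response above
}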
	I0906 15:14:44.259630   29027 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:14:44.438098   29027 request.go:533] Waited for 178.424817ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:44.438162   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:44.438171   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.438182   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.438193   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.443150   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:44.443163   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.443170   29027 round_trippers.go:580]     Audit-Id: 4ac8a134-3cef-4e88-a6bc-552819320443
	I0906 15:14:44.443174   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.443180   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.443186   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.443192   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.443196   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.444883   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1153"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1147","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86207 chars]
	I0906 15:14:44.446738   29027 system_pods.go:59] 12 kube-system pods found
	I0906 15:14:44.446748   29027 system_pods.go:61] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:14:44.446752   29027 system_pods.go:61] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:14:44.446756   29027 system_pods.go:61] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:14:44.446759   29027 system_pods.go:61] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:14:44.446762   29027 system_pods.go:61] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:14:44.446766   29027 system_pods.go:61] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:14:44.446770   29027 system_pods.go:61] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:14:44.446773   29027 system_pods.go:61] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:14:44.446776   29027 system_pods.go:61] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:14:44.446780   29027 system_pods.go:61] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:14:44.446785   29027 system_pods.go:61] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:14:44.446791   29027 system_pods.go:61] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 15:14:44.446796   29027 system_pods.go:74] duration metric: took 187.161934ms to wait for pod list to return data ...
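The "12 kube-system pods found" summary and the per-pod lines above are a single List call over the namespace. A sketch, with clientset construction as in the earlier snippet:

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// listSystemPods prints one line per kube-system pod: name, UID, phase,
// matching the shape of the system_pods.go output above.
func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
    pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    for _, p := range pods.Items {
        fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    }
    return nil
}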
	I0906 15:14:44.446801   29027 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:14:44.638323   29027 request.go:533] Waited for 191.446721ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/default/serviceaccounts
	I0906 15:14:44.638429   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/default/serviceaccounts
	I0906 15:14:44.638438   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.638447   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.638459   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.641641   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:44.641654   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.641660   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.641665   29027 round_trippers.go:580]     Audit-Id: eb1f03cd-86cd-4381-b321-768925f237ea
	I0906 15:14:44.641670   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.641674   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.641680   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.641684   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.641688   29027 round_trippers.go:580]     Content-Length: 262
	I0906 15:14:44.641701   29027 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1153"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2535e7c3-51eb-44d2-8df8-c188db57dc73","resourceVersion":"310","creationTimestamp":"2022-09-06T22:06:47Z"}}]}
	I0906 15:14:44.641819   29027 default_sa.go:45] found service account: "default"
	I0906 15:14:44.641825   29027 default_sa.go:55] duration metric: took 195.019776ms for default service account to be created ...
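Waiting for the "default" ServiceAccount is a poll-until-found loop; the controller manager creates the account shortly after startup. A sketch using apimachinery's wait helpers (the interval and function name are assumptions):

import (
    "context"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitDefaultSA polls until the "default" ServiceAccount exists. The lookup
// error is swallowed so transient failures simply trigger another poll.
func waitDefaultSA(cs kubernetes.Interface, timeout time.Duration) error {
    return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(
            context.TODO(), "default", metav1.GetOptions{})
        return err == nil, nil
    })
}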
	I0906 15:14:44.641830   29027 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:14:44.838043   29027 request.go:533] Waited for 196.177236ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:44.838100   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/namespaces/kube-system/pods
	I0906 15:14:44.838106   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:44.838132   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:44.838144   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:44.842231   29027 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 15:14:44.842241   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:44.842247   29027 round_trippers.go:580]     Audit-Id: 2cb0098d-6c33-4bcd-981b-da5acd2add2e
	I0906 15:14:44.842252   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:44.842257   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:44.842264   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:44.842268   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:44.842277   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:44 GMT
	I0906 15:14:44.843873   29027 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1153"},"items":[{"metadata":{"name":"coredns-565d847f94-t6l66","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"3d3ced34-e06b-4586-8c69-2f495e1290dd","resourceVersion":"1147","creationTimestamp":"2022-09-06T22:06:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"45018623-1704-4db7-9d18-1942ef52b7d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-06T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45018623-1704-4db7-9d18-1942ef52b7d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86207 chars]
	I0906 15:14:44.846190   29027 system_pods.go:86] 12 kube-system pods found
	I0906 15:14:44.846203   29027 system_pods.go:89] "coredns-565d847f94-t6l66" [3d3ced34-e06b-4586-8c69-2f495e1290dd] Running
	I0906 15:14:44.846208   29027 system_pods.go:89] "etcd-multinode-20220906150606-22187" [b5bff5e3-326e-47d3-b2ab-d66b2c5f72aa] Running
	I0906 15:14:44.846212   29027 system_pods.go:89] "kindnet-cddz8" [923124b2-caa0-495b-ad35-ac13cb527604] Running
	I0906 15:14:44.846216   29027 system_pods.go:89] "kindnet-jkg8p" [5b1442a6-fdf2-4766-a927-f1213c27550b] Running
	I0906 15:14:44.846219   29027 system_pods.go:89] "kindnet-nh9r5" [bae0c657-7cfe-416f-bbcd-b3d229bd137a] Running
	I0906 15:14:44.846223   29027 system_pods.go:89] "kube-apiserver-multinode-20220906150606-22187" [b8fcee55-a96c-4a49-9872-f5c791daf820] Running
	I0906 15:14:44.846227   29027 system_pods.go:89] "kube-controller-manager-multinode-20220906150606-22187" [d9ca106c-c765-4535-9cda-609a956ab91d] Running
	I0906 15:14:44.846232   29027 system_pods.go:89] "kube-proxy-czbjx" [c88daf0a-05d7-45b7-b888-8e0749e4d321] Running
	I0906 15:14:44.846235   29027 system_pods.go:89] "kube-proxy-kkmpm" [0b228e9a-6577-46a3-b848-9c9fca602ba6] Running
	I0906 15:14:44.846239   29027 system_pods.go:89] "kube-proxy-wnrrx" [260cbcc2-7110-48ce-aa3d-482b3694ae6d] Running
	I0906 15:14:44.846257   29027 system_pods.go:89] "kube-scheduler-multinode-20220906150606-22187" [ada7d5af-ae80-465b-b63c-866ee9dbba95] Running
	I0906 15:14:44.846266   29027 system_pods.go:89] "storage-provisioner" [cf24b814-e576-465e-9c3e-f8c04c05c695] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 15:14:44.846272   29027 system_pods.go:126] duration metric: took 204.437402ms to wait for k8s-apps to be running ...
	I0906 15:14:44.846278   29027 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:14:44.846326   29027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:14:44.855759   29027 system_svc.go:56] duration metric: took 9.476035ms WaitForService to wait for kubelet.
	I0906 15:14:44.855772   29027 kubeadm.go:573] duration metric: took 29.700208469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
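The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH and treats exit status 0 as "running". A sketch with golang.org/x/crypto/ssh, using the kind of forwarded port and per-machine key the log shows; error handling is trimmed to essentials and this is not minikube's ssh_runner:

import (
    "os"

    "golang.org/x/crypto/ssh"
)

// isActive dials the node's forwarded SSH port (e.g. "127.0.0.1:57304")
// and runs the systemctl probe; a nil error means the unit is active.
func isActive(addr, keyPath string) error {
    key, err := os.ReadFile(keyPath)
    if err != nil {
        return err
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        return err
    }
    client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
    })
    if err != nil {
        return err
    }
    defer client.Close()
    session, err := client.NewSession()
    if err != nil {
        return err
    }
    defer session.Close()
    return session.Run("sudo systemctl is-active --quiet service kubelet")
}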
	I0906 15:14:44.855788   29027 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:14:45.038429   29027 request.go:533] Waited for 182.602775ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57276/api/v1/nodes
	I0906 15:14:45.038480   29027 round_trippers.go:463] GET https://127.0.0.1:57276/api/v1/nodes
	I0906 15:14:45.038486   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:45.038494   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:45.038510   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:45.041650   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:45.041662   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:45.041667   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:45.041672   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:45.041678   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:45 GMT
	I0906 15:14:45.041683   29027 round_trippers.go:580]     Audit-Id: 7033247f-d261-41dd-8f59-2bddffd0c32f
	I0906 15:14:45.041687   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:45.041692   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:45.041756   29027 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1153"},"items":[{"metadata":{"name":"multinode-20220906150606-22187","uid":"e59741f9-22d9-4ce7-a1b6-c1caa8773d0d","resourceVersion":"1045","creationTimestamp":"2022-09-06T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220906150606-22187","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b03dd9a575222c1597a06c17f8fb0088dcad17c4","minikube.k8s.io/name":"multinode-20220906150606-22187","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_09_06T15_06_36_0700","minikube.k8s.io/version":"v1.26.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet"," [truncated 10244 chars]
	I0906 15:14:45.042047   29027 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:14:45.042056   29027 node_conditions.go:123] node cpu capacity is 6
	I0906 15:14:45.042063   29027 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:14:45.042066   29027 node_conditions.go:123] node cpu capacity is 6
	I0906 15:14:45.042069   29027 node_conditions.go:105] duration metric: took 186.276039ms to run NodePressure ...
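The NodePressure pass reads each node's capacity straight from status.capacity; the two identical cpu/storage pairs above are the two nodes of this cluster. A sketch of the same read:

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// printCapacity mirrors the node_conditions output above: per-node CPU and
// ephemeral-storage capacity taken from status.capacity.
func printCapacity(ctx context.Context, cs kubernetes.Interface) error {
    nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    }
    return nil
}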
	I0906 15:14:45.042078   29027 start.go:216] waiting for startup goroutines ...
	I0906 15:14:45.042679   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:14:45.042742   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:14:45.064900   29027 out.go:177] * Starting worker node multinode-20220906150606-22187-m02 in cluster multinode-20220906150606-22187
	I0906 15:14:45.086431   29027 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:14:45.107329   29027 out.go:177] * Pulling base image ...
	I0906 15:14:45.149556   29027 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:14:45.149563   29027 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:14:45.149596   29027 cache.go:57] Caching tarball of preloaded images
	I0906 15:14:45.149757   29027 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:14:45.149780   29027 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:14:45.150696   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:14:45.213521   29027 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:14:45.213549   29027 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:14:45.213559   29027 cache.go:208] Successfully downloaded all kic artifacts
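The preload check above looks for a cached tarball whose name encodes the Kubernetes version, container runtime, storage driver and architecture. A sketch of the path construction, matching the filename visible in the log; the helper name is an assumption:

import (
    "fmt"
    "os"
    "path/filepath"
)

// preloadExists reports whether the preload tarball is already cached, e.g.
// cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4.
func preloadExists(minikubeHome, k8sVersion, runtime, arch string) bool {
    name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
        k8sVersion, runtime, arch)
    p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    _, err := os.Stat(p)
    return err == nil
}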
	I0906 15:14:45.213601   29027 start.go:364] acquiring machines lock for multinode-20220906150606-22187-m02: {Name:mk634e5142ae9a72af4ccf4e417277befcfbdc1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:14:45.213679   29027 start.go:368] acquired machines lock for "multinode-20220906150606-22187-m02" in 67.581µs
	I0906 15:14:45.213696   29027 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:14:45.213701   29027 fix.go:55] fixHost starting: m02
	I0906 15:14:45.213937   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:14:45.277559   29027 fix.go:103] recreateIfNeeded on multinode-20220906150606-22187-m02: state=Stopped err=<nil>
	W0906 15:14:45.277580   29027 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:14:45.299176   29027 out.go:177] * Restarting existing docker container for "multinode-20220906150606-22187-m02" ...
	I0906 15:14:45.341287   29027 cli_runner.go:164] Run: docker start multinode-20220906150606-22187-m02
	I0906 15:14:45.685256   29027 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:14:45.750301   29027 kic.go:415] container "multinode-20220906150606-22187-m02" state is running.
	I0906 15:14:45.750888   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
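Most of the cli_runner steps here are `docker container inspect` with a Go template, like the IP lookup above. The same call from Go, as a sketch rather than minikube's cli_runner:

import (
    "os/exec"
    "strings"
)

// containerIP returns the container's first network IP via the same
// inspect template the log shows.
func containerIP(name string) (string, error) {
    out, err := exec.Command("docker", "container", "inspect", "-f",
        "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}", name).Output()
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(out)), nil
}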
	I0906 15:14:45.818341   29027 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/config.json ...
	I0906 15:14:45.818831   29027 machine.go:88] provisioning docker machine ...
	I0906 15:14:45.818848   29027 ubuntu.go:169] provisioning hostname "multinode-20220906150606-22187-m02"
	I0906 15:14:45.818937   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:45.892246   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:45.892421   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:45.892436   29027 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220906150606-22187-m02 && echo "multinode-20220906150606-22187-m02" | sudo tee /etc/hostname
	I0906 15:14:46.028099   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220906150606-22187-m02
	
	I0906 15:14:46.028170   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:46.093016   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:46.093233   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:46.093255   29027 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220906150606-22187-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220906150606-22187-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220906150606-22187-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:14:46.203928   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:14:46.203950   29027 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:14:46.203966   29027 ubuntu.go:177] setting up certificates
	I0906 15:14:46.203975   29027 provision.go:83] configureAuth start
	I0906 15:14:46.204050   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:14:46.270597   29027 provision.go:138] copyHostCerts
	I0906 15:14:46.270653   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:14:46.270706   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:14:46.270714   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:14:46.270810   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:14:46.270963   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:14:46.270994   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:14:46.270999   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:14:46.271059   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:14:46.271172   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:14:46.271199   29027 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:14:46.271203   29027 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:14:46.271259   29027 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:14:46.271374   29027 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.multinode-20220906150606-22187-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220906150606-22187-m02]
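The generated server certificate's SAN list above (node IP, localhost, minikube, the machine hostname) is what makes TLS to the remote dockerd verifiable under any of those names. A compact crypto/x509 sketch of issuing such a certificate from an existing CA; serial, lifetime and key size are illustrative, and this is not minikube's generator:

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "math/big"
    "net"
    "time"
)

// serverCert issues a server cert whose SANs cover the given IPs and host
// names, signed by the provided CA (cf. the san=[...] list above).
func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey, hosts []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        return nil, nil, err
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative serial
        Subject:      pkix.Name{CommonName: hosts[0]},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(1, 0, 0), // illustrative lifetime
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     hosts,
        IPAddresses:  ips,
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    return der, key, err
}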
	I0906 15:14:46.374806   29027 provision.go:172] copyRemoteCerts
	I0906 15:14:46.374879   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:14:46.374958   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:46.444609   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:46.531684   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 15:14:46.531748   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:14:46.549421   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 15:14:46.549491   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0906 15:14:46.566040   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 15:14:46.566107   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:14:46.583648   29027 provision.go:86] duration metric: configureAuth took 379.661276ms
	I0906 15:14:46.583660   29027 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:14:46.583852   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:14:46.583909   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:46.648048   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:46.648226   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:46.648237   29027 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:14:46.769847   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:14:46.769862   29027 ubuntu.go:71] root file system type: overlay
	I0906 15:14:46.770004   29027 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:14:46.770082   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:46.834190   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:46.834349   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:46.834414   29027 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:14:46.957738   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:14:46.957823   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.021758   29027 main.go:134] libmachine: Using SSH client type: native
	I0906 15:14:47.021919   29027 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57304 <nil> <nil>}
	I0906 15:14:47.021933   29027 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:14:47.137286   29027 main.go:134] libmachine: SSH cmd err, output: <nil>: 
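The command above is an idempotent update: docker.service is swapped out and the daemon reloaded and restarted only when the freshly rendered unit differs from what is already on disk, so an unchanged config costs no restart. The same pattern locally in Go, as a sketch rather than minikube's implementation:

import (
    "bytes"
    "os"
    "os/exec"
)

// writeIfChanged replaces path with data and reloads/restarts docker only
// when the content differs — the diff || { mv; systemctl ... } pattern above.
func writeIfChanged(path string, data []byte) error {
    old, _ := os.ReadFile(path) // a missing file reads as empty
    if bytes.Equal(old, data) {
        return nil
    }
    if err := os.WriteFile(path, data, 0o644); err != nil {
        return err
    }
    if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
        return err
    }
    return exec.Command("systemctl", "restart", "docker").Run()
}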
	I0906 15:14:47.137302   29027 machine.go:91] provisioned docker machine in 1.318458177s
	I0906 15:14:47.137308   29027 start.go:300] post-start starting for "multinode-20220906150606-22187-m02" (driver="docker")
	I0906 15:14:47.137314   29027 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:14:47.137368   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:14:47.137412   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.203899   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:47.286542   29027 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:14:47.289824   29027 command_runner.go:130] > NAME="Ubuntu"
	I0906 15:14:47.289833   29027 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0906 15:14:47.289836   29027 command_runner.go:130] > ID=ubuntu
	I0906 15:14:47.289840   29027 command_runner.go:130] > ID_LIKE=debian
	I0906 15:14:47.289843   29027 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0906 15:14:47.289847   29027 command_runner.go:130] > VERSION_ID="20.04"
	I0906 15:14:47.289851   29027 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 15:14:47.289855   29027 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 15:14:47.289859   29027 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 15:14:47.289875   29027 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 15:14:47.289879   29027 command_runner.go:130] > VERSION_CODENAME=focal
	I0906 15:14:47.289882   29027 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0906 15:14:47.289925   29027 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:14:47.289936   29027 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:14:47.289947   29027 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:14:47.289952   29027 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:14:47.289958   29027 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:14:47.290073   29027 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:14:47.290204   29027 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:14:47.290212   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /etc/ssl/certs/221872.pem
	I0906 15:14:47.290353   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:14:47.297939   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:14:47.315017   29027 start.go:303] post-start completed in 177.699595ms
	I0906 15:14:47.315088   29027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:14:47.315180   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.378993   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:47.462532   29027 command_runner.go:130] > 11%!
	(MISSING)I0906 15:14:47.462993   29027 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:14:47.466982   29027 command_runner.go:130] > 50G
	I0906 15:14:47.467253   29027 fix.go:57] fixHost completed within 2.25354269s
	I0906 15:14:47.467264   29027 start.go:83] releasing machines lock for "multinode-20220906150606-22187-m02", held for 2.253569942s
	I0906 15:14:47.467329   29027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:14:47.552548   29027 out.go:177] * Found network options:
	I0906 15:14:47.574060   29027 out.go:177]   - NO_PROXY=192.168.58.2
	W0906 15:14:47.595328   29027 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 15:14:47.595374   29027 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 15:14:47.595549   29027 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:14:47.595557   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:14:47.595642   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.595643   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:14:47.663985   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:47.664118   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57304 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:14:47.788186   29027 command_runner.go:130] > <a href="https://github.com/kubernetes/k8s.io/wiki/New-Registry-url-for-Kubernetes-(registry.k8s.io)">Temporary Redirect</a>.
	I0906 15:14:47.791338   29027 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0906 15:14:47.805508   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:14:47.872738   29027 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 15:14:47.965571   29027 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:14:47.976781   29027 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0906 15:14:47.977245   29027 command_runner.go:130] > [Unit]
	I0906 15:14:47.977260   29027 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 15:14:47.977273   29027 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 15:14:47.977281   29027 command_runner.go:130] > BindsTo=containerd.service
	I0906 15:14:47.977288   29027 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0906 15:14:47.977294   29027 command_runner.go:130] > Wants=network-online.target
	I0906 15:14:47.977303   29027 command_runner.go:130] > Requires=docker.socket
	I0906 15:14:47.977309   29027 command_runner.go:130] > StartLimitBurst=3
	I0906 15:14:47.977312   29027 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 15:14:47.977315   29027 command_runner.go:130] > [Service]
	I0906 15:14:47.977320   29027 command_runner.go:130] > Type=notify
	I0906 15:14:47.977327   29027 command_runner.go:130] > Restart=on-failure
	I0906 15:14:47.977347   29027 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0906 15:14:47.977360   29027 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 15:14:47.977374   29027 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 15:14:47.977387   29027 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 15:14:47.977405   29027 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 15:14:47.977415   29027 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 15:14:47.977423   29027 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 15:14:47.977433   29027 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 15:14:47.977442   29027 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 15:14:47.977450   29027 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 15:14:47.977454   29027 command_runner.go:130] > ExecStart=
	I0906 15:14:47.977465   29027 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0906 15:14:47.977471   29027 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 15:14:47.977478   29027 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 15:14:47.977483   29027 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 15:14:47.977486   29027 command_runner.go:130] > LimitNOFILE=infinity
	I0906 15:14:47.977490   29027 command_runner.go:130] > LimitNPROC=infinity
	I0906 15:14:47.977493   29027 command_runner.go:130] > LimitCORE=infinity
	I0906 15:14:47.977499   29027 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 15:14:47.977504   29027 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 15:14:47.977507   29027 command_runner.go:130] > TasksMax=infinity
	I0906 15:14:47.977515   29027 command_runner.go:130] > TimeoutStartSec=0
	I0906 15:14:47.977520   29027 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 15:14:47.977524   29027 command_runner.go:130] > Delegate=yes
	I0906 15:14:47.977534   29027 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 15:14:47.977541   29027 command_runner.go:130] > KillMode=process
	I0906 15:14:47.977556   29027 command_runner.go:130] > [Install]
	I0906 15:14:47.977572   29027 command_runner.go:130] > WantedBy=multi-user.target
	I0906 15:14:47.979109   29027 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:14:47.979154   29027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:14:47.988414   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:14:48.000289   29027 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:14:48.000300   29027 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 15:14:48.000953   29027 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:14:48.072271   29027 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:14:48.142544   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:14:48.205608   29027 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:14:48.432967   29027 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:14:48.498398   29027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:14:48.562390   29027 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:14:48.571790   29027 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:14:48.571856   29027 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:14:48.575594   29027 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 15:14:48.575606   29027 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 15:14:48.575615   29027 command_runner.go:130] > Device: 10002fh/1048623d	Inode: 130         Links: 1
	I0906 15:14:48.575625   29027 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0906 15:14:48.575633   29027 command_runner.go:130] > Access: 2022-09-06 22:14:47.920648036 +0000
	I0906 15:14:48.575638   29027 command_runner.go:130] > Modify: 2022-09-06 22:14:47.892648038 +0000
	I0906 15:14:48.575643   29027 command_runner.go:130] > Change: 2022-09-06 22:14:47.897648038 +0000
	I0906 15:14:48.575646   29027 command_runner.go:130] >  Birth: -
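start.go:450 polls up to 60s for the socket; here the first stat already succeeds. An equivalent shell poll (a minimal sketch, not minikube's actual loop):

  for _ in $(seq 1 60); do
    [ -S /var/run/cri-dockerd.sock ] && break
    sleep 1
  done
  stat /var/run/cri-dockerd.sock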
	I0906 15:14:48.575760   29027 start.go:471] Will wait 60s for crictl version
	I0906 15:14:48.575805   29027 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:14:48.601488   29027 command_runner.go:130] > Version:  0.1.0
	I0906 15:14:48.601514   29027 command_runner.go:130] > RuntimeName:  docker
	I0906 15:14:48.601522   29027 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0906 15:14:48.601537   29027 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0906 15:14:48.603289   29027 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:14:48.603351   29027 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:14:48.636740   29027 command_runner.go:130] > 20.10.17
	I0906 15:14:48.639507   29027 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:14:48.671783   29027 command_runner.go:130] > 20.10.17
	I0906 15:14:48.719080   29027 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:14:48.740144   29027 out.go:177]   - env NO_PROXY=192.168.58.2
	I0906 15:14:48.761288   29027 cli_runner.go:164] Run: docker exec -t multinode-20220906150606-22187-m02 dig +short host.docker.internal
	I0906 15:14:48.883296   29027 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:14:48.883380   29027 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:14:48.887746   29027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
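Because /etc/hosts is bind-mounted into the container, it can only be rewritten in place: the one-liner filters out any stale host.minikube.internal entry, appends the fresh mapping, and copies the temp file back with cp (a rename would fail across the mount). The same pattern, with placeholder values, works for any managed entry:

  # 10.0.0.5 / my.managed.host are illustrative placeholders
  { grep -v $'\tmy.managed.host$' /etc/hosts; echo "10.0.0.5	my.managed.host"; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$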
	I0906 15:14:48.897032   29027 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187 for IP: 192.168.58.3
	I0906 15:14:48.897185   29027 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:14:48.897235   29027 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:14:48.897242   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 15:14:48.897263   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 15:14:48.897281   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 15:14:48.897329   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 15:14:48.897448   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:14:48.897499   29027 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:14:48.897512   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:14:48.897551   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:14:48.897584   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:14:48.897617   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:14:48.897685   29027 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:14:48.897720   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> /usr/share/ca-certificates/221872.pem
	I0906 15:14:48.897744   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:48.897761   29027 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem -> /usr/share/ca-certificates/22187.pem
	I0906 15:14:48.898077   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:14:48.916608   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:14:48.932692   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:14:48.950240   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:14:48.966949   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:14:48.984250   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:14:49.000698   29027 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:14:49.017011   29027 ssh_runner.go:195] Run: openssl version
	I0906 15:14:49.022038   29027 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0906 15:14:49.022238   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:14:49.029845   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:49.033599   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:49.033622   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:49.033662   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:14:49.038469   29027 command_runner.go:130] > b5213941
	I0906 15:14:49.038723   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:14:49.045631   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:14:49.053075   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:14:49.056828   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:14:49.056844   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:14:49.056882   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:14:49.061737   29027 command_runner.go:130] > 51391683
	I0906 15:14:49.062373   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:14:49.070334   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:14:49.078258   29027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:14:49.081954   29027 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:14:49.081973   29027 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:14:49.082014   29027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:14:49.086807   29027 command_runner.go:130] > 3ec20f2e
	I0906 15:14:49.087132   29027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
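The three symlinks exist because OpenSSL looks up CAs in /etc/ssl/certs by subject-hash filename: openssl x509 -hash prints the hash (b5213941 for minikubeCA), and <hash>.0 must point at the PEM for verification to find it. The full round trip for one cert (sketch):

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
  openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # expected: OK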
	I0906 15:14:49.094065   29027 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:14:49.165274   29027 command_runner.go:130] > systemd
	I0906 15:14:49.168628   29027 cni.go:95] Creating CNI manager for ""
	I0906 15:14:49.168638   29027 cni.go:156] 2 nodes found, recommending kindnet
	I0906 15:14:49.168656   29027 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:14:49.168668   29027 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220906150606-22187 NodeName:multinode-20220906150606-22187-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:14:49.168753   29027 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220906150606-22187-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
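The generated config is four YAML documents in one stream: InitConfiguration (node-local join settings), ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration; kubeadm accepts them concatenated. A quick structural check on such a file (the path is illustrative):

  awk '/^kind:/ {print $2}' kubeadm.yaml
  # InitConfiguration
  # ClusterConfiguration
  # KubeletConfiguration
  # KubeProxyConfiguration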
	I0906 15:14:49.168795   29027 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220906150606-22187-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
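The kubelet drop-in repeats the ExecStart-clearing idiom and wires the kubelet to cri-dockerd through --container-runtime=remote plus the two endpoint flags, which is required on v1.25.0 since dockershim was removed in 1.24. To inspect the effective unit on the node (sketch):

  systemctl cat kubelet   # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf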
	I0906 15:14:49.168853   29027 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:14:49.175596   29027 command_runner.go:130] > kubeadm
	I0906 15:14:49.175605   29027 command_runner.go:130] > kubectl
	I0906 15:14:49.175609   29027 command_runner.go:130] > kubelet
	I0906 15:14:49.176427   29027 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:14:49.176477   29027 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0906 15:14:49.183422   29027 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (496 bytes)
	I0906 15:14:49.196163   29027 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:14:49.209908   29027 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:14:49.213641   29027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:14:49.222875   29027 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:14:49.223063   29027 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:14:49.223064   29027 start.go:285] JoinCluster: &{Name:multinode-20220906150606-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:multinode-20220906150606-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:14:49.223130   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0906 15:14:49.223175   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:14:49.288153   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:14:49.429582   29027 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
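kubeadm token create --print-join-command --ttl=0 mints a non-expiring bootstrap token; the --discovery-token-ca-cert-hash is the SHA-256 of the cluster CA's public key and can be recomputed independently (sketch, using the CA path from the certs step above):

  openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'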
	I0906 15:14:49.434195   29027 start.go:298] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:14:49.434220   29027 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:14:49.434452   29027 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl drain multinode-20220906150606-22187-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0906 15:14:49.434493   29027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:14:49.500020   29027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57272 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:14:49.642697   29027 command_runner.go:130] > node/multinode-20220906150606-22187-m02 cordoned
	I0906 15:14:52.657820   29027 command_runner.go:130] > pod "busybox-65db55d5d6-rqxp8" has DeletionTimestamp older than 1 seconds, skipping
	I0906 15:14:52.657834   29027 command_runner.go:130] > node/multinode-20220906150606-22187-m02 drained
	I0906 15:14:52.661056   29027 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0906 15:14:52.661071   29027 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-cddz8, kube-system/kube-proxy-wnrrx
	I0906 15:14:52.661096   29027 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl drain multinode-20220906150606-22187-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.226609366s)
	I0906 15:14:52.661109   29027 node.go:109] successfully drained node "m02"
	I0906 15:14:52.661411   29027 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:14:52.661610   29027 kapi.go:59] client config for multinode-20220906150606-22187: &rest.Config{Host:"https://127.0.0.1:57276", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/multinode-20220906150606-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:14:52.661857   29027 request.go:1073] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0906 15:14:52.661883   29027 round_trippers.go:463] DELETE https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m02
	I0906 15:14:52.661887   29027 round_trippers.go:469] Request Headers:
	I0906 15:14:52.661894   29027 round_trippers.go:473]     Content-Type: application/json
	I0906 15:14:52.661906   29027 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 15:14:52.661911   29027 round_trippers.go:473]     Accept: application/json, */*
	I0906 15:14:52.665187   29027 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 15:14:52.665199   29027 round_trippers.go:577] Response Headers:
	I0906 15:14:52.665204   29027 round_trippers.go:580]     Audit-Id: 3457ee85-e95d-4e94-93c5-abb29c0d4891
	I0906 15:14:52.665210   29027 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 15:14:52.665215   29027 round_trippers.go:580]     Content-Type: application/json
	I0906 15:14:52.665219   29027 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa3353c2-4321-419a-891b-36545235008e
	I0906 15:14:52.665224   29027 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e478b3d-8773-43a1-a06c-9f82173aadc2
	I0906 15:14:52.665231   29027 round_trippers.go:580]     Content-Length: 185
	I0906 15:14:52.665236   29027 round_trippers.go:580]     Date: Tue, 06 Sep 2022 22:14:52 GMT
	I0906 15:14:52.665249   29027 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-20220906150606-22187-m02","kind":"nodes","uid":"0cd805fb-0749-46b4-a7e3-90583fb06a8a"}}
	I0906 15:14:52.665267   29027 node.go:125] successfully deleted node "m02"
	I0906 15:14:52.665274   29027 start.go:302] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
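The round_trippers lines above amount to a raw DELETE against the API server, authenticated with the profile's client certificate. The same operation by hand, either via kubectl or by mirroring the request (cert paths abbreviated from the client config above):

  kubectl delete node multinode-20220906150606-22187-m02
  # or, the raw request:
  curl --cacert ca.crt --cert client.crt --key client.key \
    -X DELETE https://127.0.0.1:57276/api/v1/nodes/multinode-20220906150606-22187-m02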
	I0906 15:14:52.665286   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:14:52.665297   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:14:52.698107   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:14:52.799738   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:14:52.799761   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:14:52.827532   29027 command_runner.go:130] ! W0906 22:14:52.708626    1098 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:14:52.827545   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:14:52.827558   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:14:52.827564   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:14:52.827569   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:14:52.827575   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:14:52.827584   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:14:52.827590   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:14:52.827634   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:14:52.708626    1098 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:14:52.827644   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:14:52.827655   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:14:52.871345   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:14:52.871365   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:14:52.871387   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
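kubeadm reset aborts because both containerd.sock and cri-dockerd.sock exist on the host and it refuses to guess between them; since the reset never runs, the node's old kubelet state stays in place and every join retry below hits the same "already exists" error. The message names the remedy: pass the socket explicitly (manual sketch):

  sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock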
	I0906 15:14:52.871409   29027 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:14:52.708626    1098 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
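retry.go waits 11.0s here, then 21.6s, 26.2s, and 31.6s on the following attempts, consistent with an increasing backoff plus jitter. A minimal sketch of such a loop (assumed shape, not minikube's implementation):

  delay=11
  for attempt in 1 2 3 4; do
    kubeadm_join && break          # kubeadm_join: hypothetical wrapper around the full join command above
    sleep "$delay"
    delay=$(( delay * 3 / 2 ))     # grow the wait; the real retry.go also randomizes
  done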
	I0906 15:15:03.919365   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:15:03.919443   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:15:03.956243   29027 command_runner.go:130] ! W0906 22:15:03.974898    1512 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:15:03.956951   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:15:03.982039   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:15:03.986535   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:15:04.043999   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:15:04.044014   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:15:04.070699   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:15:04.070711   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:04.073761   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:15:04.073777   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:15:04.073784   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:15:04.073808   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:03.974898    1512 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:04.073825   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:15:04.073833   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:15:04.110982   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:15:04.110995   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:04.111009   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:04.111019   29027 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:03.974898    1512 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.718841   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:15:25.718875   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:15:25.753192   29027 command_runner.go:130] ! W0906 22:15:25.765845    2006 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:15:25.753207   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:15:25.776083   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:15:25.780723   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:15:25.838052   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:15:25.838067   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:15:25.863965   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:15:25.863984   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.866695   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:15:25.866706   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:15:25.866714   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:15:25.866744   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:25.765845    2006 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.866752   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:15:25.866759   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:15:25.901532   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:15:25.901546   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.901560   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:25.901572   29027 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:25.765845    2006 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.104629   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:15:52.120910   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:15:52.156464   29027 command_runner.go:130] ! W0906 22:15:52.166575    2284 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:15:52.156535   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:15:52.180408   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:15:52.185678   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:15:52.244879   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:15:52.244892   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:15:52.270041   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:15:52.270054   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.273029   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:15:52.273041   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:15:52.273047   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0906 15:15:52.273074   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:52.166575    2284 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.273082   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:15:52.273090   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:15:52.310975   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:15:52.310988   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.311003   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:15:52.311015   29027 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:15:52.166575    2284 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:16:23.961128   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:16:23.961248   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:16:23.997795   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:16:24.094079   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:16:24.094111   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:16:24.111007   29027 command_runner.go:130] ! W0906 22:16:24.008417    2614 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:16:24.111022   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:16:24.111035   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:16:24.111039   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:16:24.111044   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:16:24.111050   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:16:24.111065   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:16:24.111072   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:16:24.111113   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:16:24.008417    2614 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:16:24.111123   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:16:24.111133   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:16:24.148203   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:16:24.148216   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:16:24.148232   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:16:24.148244   29027 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:16:24.008417    2614 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:17:10.960353   29027 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0906 15:17:10.960440   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02"
	I0906 15:17:10.995679   29027 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 15:17:11.095742   29027 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 15:17:11.095763   29027 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 15:17:11.113200   29027 command_runner.go:130] ! W0906 22:17:10.997798    3044 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:17:11.113214   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0906 15:17:11.113225   29027 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:17:11.113230   29027 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:17:11.113236   29027 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0906 15:17:11.113242   29027 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0906 15:17:11.113252   29027 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0906 15:17:11.113257   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0906 15:17:11.113285   29027 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:17:10.997798    3044 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0906 15:17:11.113292   29027 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0906 15:17:11.113302   29027 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force"
	I0906 15:17:11.152058   29027 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0906 15:17:11.152071   29027 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0906 15:17:11.152085   29027 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
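	The reset failures above are kubeadm refusing to pick between the two CRI sockets it found (containerd and cri-dockerd). kubeadm reset accepts the same --cri-socket flag that the join command in this log already passes, so a manual reset on the worker would presumably look like the sketch below (hypothetical recovery commands, not part of this test run; binary path and socket taken from the log):
	
	  # Hypothetical: pin the CRI socket so kubeadm reset no longer errors out
	  sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" \
	    kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock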
	I0906 15:17:11.152100   29027 start.go:287] JoinCluster complete in 2m21.928535342s
	I0906 15:17:11.174107   29027 out.go:177] 
	W0906 15:17:11.195219   29027 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wg14hu.g93d1rvzo1g9ef46 --discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220906150606-22187-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0906 22:17:10.997798    3044 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220906150606-22187-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:17:11.195252   29027 out.go:239] * 
	W0906 15:17:11.196016   29027 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
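	The underlying failure is the kubelet-start phase: a Node object named multinode-20220906150606-22187-m02 is still registered and "Ready", so kubeadm will not join a new node under the same name. A hypothetical manual recovery (not part of this run) would delete the stale Node using the cluster's kubeconfig and rerun the join command logged above; the token and discovery hash are abbreviated here as placeholders:
	
	  # Hypothetical: clear the stale Node object, then retry the logged join
	  kubectl delete node multinode-20220906150606-22187-m02
	  sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" \
	    kubeadm join control-plane.minikube.internal:8443 \
	    --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	    --cri-socket unix:///var/run/cri-dockerd.sock \
	    --node-name=multinode-20220906150606-22187-m02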
	I0906 15:17:11.281123   29027 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:13:48 UTC, end at Tue 2022-09-06 22:17:13 UTC. --
	Sep 06 22:13:50 multinode-20220906150606-22187 dockerd[132]: time="2022-09-06T22:13:50.957118757Z" level=info msg="Daemon shutdown complete"
	Sep 06 22:13:50 multinode-20220906150606-22187 dockerd[132]: time="2022-09-06T22:13:50.957197652Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 22:13:50 multinode-20220906150606-22187 systemd[1]: docker.service: Succeeded.
	Sep 06 22:13:50 multinode-20220906150606-22187 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 22:13:50 multinode-20220906150606-22187 systemd[1]: docker.service: Consumed 1.130s CPU time.
	Sep 06 22:13:50 multinode-20220906150606-22187 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.012321408Z" level=info msg="Starting up"
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.014226479Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.014296759Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.014319707Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.014329700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.015579164Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.015613112Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.015631189Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.015640019Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.020155830Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.025723540Z" level=info msg="Loading containers: start."
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.134616876Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.168388415Z" level=info msg="Loading containers: done."
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.178523748Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.178609724Z" level=info msg="Daemon has completed initialization"
	Sep 06 22:13:51 multinode-20220906150606-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.201027287Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:13:51 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:13:51.203558642Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 22:14:34 multinode-20220906150606-22187 dockerd[641]: time="2022-09-06T22:14:34.385951790Z" level=info msg="ignoring event" container=32ecfafa90b955863dfcad23baa2f88914ab3444880d27a5c3b6e47414bc1060 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	06df22c3a5d04       6e38f40d628db       2 minutes ago       Running             storage-provisioner       4                   599705f4f546c
	5f4b9a79ce9e9       8c811b4aec35f       3 minutes ago       Running             busybox                   2                   1753e9a346a81
	fc45407356d89       58a9a0c6d96f2       3 minutes ago       Running             kube-proxy                2                   98e5922b4de87
	b382674bb1007       5185b96f0becf       3 minutes ago       Running             coredns                   2                   e4319b7bffa27
	2b0df3670e82e       d921cee849482       3 minutes ago       Running             kindnet-cni               2                   c07b4408a6a3c
	32ecfafa90b95       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       3                   599705f4f546c
	4542088d56315       a8a176a5d5d69       3 minutes ago       Running             etcd                      2                   53945de12c297
	6f8616bc1e9db       1a54c86c03a67       3 minutes ago       Running             kube-controller-manager   2                   b123347bc7fd6
	a90f34cc5b1b9       bef2cf3115095       3 minutes ago       Running             kube-scheduler            2                   932c03badaa10
	caa69a7d2004a       4d2edfd10d3e3       3 minutes ago       Running             kube-apiserver            2                   5f2dbc6c91f14
	06ab6cf627e88       d921cee849482       7 minutes ago       Exited              kindnet-cni               1                   c1eee0e53b49b
	d759aa3a43843       8c811b4aec35f       7 minutes ago       Exited              busybox                   1                   0f037fd738e3b
	803ede0924699       58a9a0c6d96f2       7 minutes ago       Exited              kube-proxy                1                   e266c748731b9
	af277a5518c67       5185b96f0becf       7 minutes ago       Exited              coredns                   1                   4f1337150041c
	4c8a1f372186f       1a54c86c03a67       7 minutes ago       Exited              kube-controller-manager   1                   9456ca1d4c44a
	3c8f51d8691c7       a8a176a5d5d69       7 minutes ago       Exited              etcd                      1                   22c8f9d461788
	ef78db90e1cfa       bef2cf3115095       7 minutes ago       Exited              kube-scheduler            1                   8cecea8208ec0
	62ca7e8901de2       4d2edfd10d3e3       7 minutes ago       Exited              kube-apiserver            1                   c20d3976c12a9
	
	* 
	* ==> coredns [af277a5518c6] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b382674bb100] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20220906150606-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220906150606-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=multinode-20220906150606-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_06_36_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:06:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220906150606-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:17:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:14:02 +0000   Tue, 06 Sep 2022 22:06:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:14:02 +0000   Tue, 06 Sep 2022 22:06:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:14:02 +0000   Tue, 06 Sep 2022 22:06:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 22:14:02 +0000   Tue, 06 Sep 2022 22:07:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-20220906150606-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                ece1d71d-4751-4899-8609-9a55b2eb3fdc
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-trdqs                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 coredns-565d847f94-t6l66                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     10m
	  kube-system                 etcd-multinode-20220906150606-22187                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-nh9r5                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-multinode-20220906150606-22187              250m (4%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-20220906150606-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-kkmpm                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-20220906150606-22187              100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  Starting                 7m5s                   kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x4 over 10m)      kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x4 over 10m)      kubelet          Node multinode-20220906150606-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x4 over 10m)      kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-20220906150606-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-20220906150606-22187 event: Registered Node multinode-20220906150606-22187 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-20220906150606-22187 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  7m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m12s (x8 over 7m12s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m12s (x8 over 7m12s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m12s (x7 over 7m12s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m12s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m56s                  node-controller  Node multinode-20220906150606-22187 event: Registered Node multinode-20220906150606-22187 in Controller
	  Normal  Starting                 3m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m16s (x8 over 3m17s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m16s (x8 over 3m17s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m16s (x7 over 3m17s)  kubelet          Node multinode-20220906150606-22187 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m59s                  node-controller  Node multinode-20220906150606-22187 event: Registered Node multinode-20220906150606-22187 in Controller
	
	
	Name:               multinode-20220906150606-22187-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220906150606-22187-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:14:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220906150606-22187-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:17:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:14:52 +0000   Tue, 06 Sep 2022 22:14:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:14:52 +0000   Tue, 06 Sep 2022 22:14:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:14:52 +0000   Tue, 06 Sep 2022 22:14:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 22:14:52 +0000   Tue, 06 Sep 2022 22:14:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-20220906150606-22187-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                9b18602e-693b-4709-ad03-6dd20ccb7ab5
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-8r9qs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kindnet-cddz8               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m40s
	  kube-system                 kube-proxy-wnrrx            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 9m30s                  kube-proxy  
	  Normal  Starting                 2m6s                   kube-proxy  
	  Normal  Starting                 6m8s                   kube-proxy  
	  Normal  NodeHasNoDiskPressure    9m40s (x8 over 9m53s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m40s (x8 over 9m53s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m30s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     6m23s (x7 over 6m30s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    6m23s (x7 over 6m30s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m23s (x7 over 6m30s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m27s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m21s (x7 over 2m27s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x7 over 2m27s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m27s)  kubelet     Node multinode-20220906150606-22187-m02 status is now: NodeHasSufficientPID
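	For reference, the node views in this section mirror what kubectl reports directly; the worker's entry, for example, can be re-queried with:
	
	  kubectl describe node multinode-20220906150606-22187-m02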
	
	* 
	* ==> dmesg <==
	* [  +0.001536] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001105] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001751] FS-Cache: N-cookie d=000000006f57a5f8 n=0000000004119ae2
	[  +0.001424] FS-Cache: N-key=[8] '89c5800300000000'
	[  +0.002109] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000d596ead8 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001797] FS-Cache: O-cookie d=000000006f57a5f8 n=00000000f83b458d
	[  +0.001466] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001134] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001810] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001458] FS-Cache: N-key=[8] '89c5800300000000'
	[  +3.680989] FS-Cache: Duplicate cookie detected
	[  +0.001019] FS-Cache: O-cookie c=000000003a8c8805 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000057637cac
	[  +0.001460] FS-Cache: O-key=[8] '88c5800300000000'
	[  +0.001144] FS-Cache: N-cookie c=000000000ab19587 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001761] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001454] FS-Cache: N-key=[8] '88c5800300000000'
	[  +0.676412] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000dd15d770 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000060e892c8
	[  +0.001441] FS-Cache: O-key=[8] '93c5800300000000'
	[  +0.001122] FS-Cache: N-cookie c=00000000e728d4f6 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001752] FS-Cache: N-cookie d=000000006f57a5f8 n=000000009b87565f
	[  +0.001438] FS-Cache: N-key=[8] '93c5800300000000'
	
	* 
	* ==> etcd [3c8f51d8691c] <==
	* {"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:10:02.449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-09-06T22:10:03.639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-09-06T22:10:03.640Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:10:03.640Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:10:03.641Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:10:03.641Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-09-06T22:10:03.640Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-20220906150606-22187 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:10:03.643Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:10:03.643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:13:22.350Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-09-06T22:13:22.350Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-20220906150606-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/09/06 22:13:22 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/09/06 22:13:22 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-09-06T22:13:22.367Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-09-06T22:13:22.368Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-09-06T22:13:22.369Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-09-06T22:13:22.369Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"multinode-20220906150606-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> etcd [4542088d5631] <==
	* {"level":"info","ts":"2022-09-06T22:13:58.400Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-09-06T22:13:58.400Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-09-06T22:13:58.401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-09-06T22:13:58.401Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-09-06T22:13:58.401Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:13:58.401Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:13:58.450Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-06T22:13:58.450Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:13:58.450Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:13:58.451Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-09-06T22:13:58.451Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-09-06T22:14:00.191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 3"}
	{"level":"info","ts":"2022-09-06T22:14:00.191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-09-06T22:14:00.191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-09-06T22:14:00.191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 4"}
	{"level":"info","ts":"2022-09-06T22:14:00.191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 4"}
	{"level":"info","ts":"2022-09-06T22:14:00.191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 4"}
	{"level":"info","ts":"2022-09-06T22:14:00.191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 4"}
	{"level":"info","ts":"2022-09-06T22:14:00.193Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-20220906150606-22187 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:14:00.193Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:14:00.193Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:14:00.194Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:14:00.194Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:14:00.194Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:14:00.195Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:17:14 up 33 min,  0 users,  load average: 0.18, 0.50, 0.52
	Linux multinode-20220906150606-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [62ca7e8901de] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:13:22.363623       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:13:22.363652       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:13:22.364457       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [caa69a7d2004] <==
	* I0906 22:14:01.723291       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0906 22:14:01.723347       1 controller.go:85] Starting OpenAPI controller
	I0906 22:14:01.723378       1 controller.go:85] Starting OpenAPI V3 controller
	I0906 22:14:01.723418       1 naming_controller.go:291] Starting NamingConditionController
	I0906 22:14:01.723452       1 establishing_controller.go:76] Starting EstablishingController
	I0906 22:14:01.723493       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0906 22:14:01.723523       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0906 22:14:01.723533       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0906 22:14:01.803546       1 cache.go:39] Caches are synced for autoregister controller
	I0906 22:14:01.803674       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0906 22:14:01.803730       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 22:14:01.805110       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0906 22:14:01.823460       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 22:14:01.823634       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0906 22:14:01.823953       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0906 22:14:01.853736       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:14:02.520564       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 22:14:02.705912       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 22:14:03.857755       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:14:03.988867       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:14:03.999718       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:14:04.168626       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:14:04.174761       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:14:14.088376       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 22:14:14.288035       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [4c8a1f372186] <==
	* I0906 22:10:17.745575       1 shared_informer.go:262] Caches are synced for endpoint
	I0906 22:10:17.750055       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0906 22:10:17.810068       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:10:17.819089       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0906 22:10:17.819168       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0906 22:10:17.819216       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0906 22:10:17.819246       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0906 22:10:17.822030       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0906 22:10:17.846702       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:10:17.911133       1 shared_informer.go:262] Caches are synced for service account
	I0906 22:10:17.920994       1 shared_informer.go:262] Caches are synced for namespace
	I0906 22:10:18.260188       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:10:18.325585       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:10:18.325623       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 22:10:47.227208       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-qmjcf"
	W0906 22:10:50.236576       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m03 node
	W0906 22:10:50.313231       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220906150606-22187-m02" does not exist
	W0906 22:10:50.313545       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
	I0906 22:10:50.317403       1 range_allocator.go:367] Set node multinode-20220906150606-22187-m02 PodCIDR to [10.244.1.0/24]
	W0906 22:10:57.710141       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
	I0906 22:10:57.710562       1 event.go:294] "Event occurred" object="multinode-20220906150606-22187-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220906150606-22187-m03 status is now: NodeNotReady"
	I0906 22:10:57.715292       1 event.go:294] "Event occurred" object="kube-system/kindnet-jkg8p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0906 22:10:57.720316       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-czbjx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0906 22:13:14.662867       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-rqxp8"
	W0906 22:13:17.676039       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
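The repeated topologycache warnings above come from the EndpointSlice topology-aware-hints cache: it needs each node's allocatable CPU and topology.kubernetes.io/zone label, and the freshly (re)registered m02/m03 nodes had not reported either yet, which is typical for nodes that have just joined. A quick way to see what the controller sees (illustrative commands, not part of this run's output):
	# Check allocatable resources and zone labels on the node named in the warning
	kubectl --context multinode-20220906150606-22187 get node multinode-20220906150606-22187-m02 --show-labels
	kubectl --context multinode-20220906150606-22187 describe node multinode-20220906150606-22187-m02 | grep -A 6 Allocatable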
	
	* 
	* ==> kube-controller-manager [6f8616bc1e9d] <==
	* I0906 22:14:14.126660       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0906 22:14:14.143662       1 shared_informer.go:262] Caches are synced for taint
	I0906 22:14:14.143727       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W0906 22:14:14.144146       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-20220906150606-22187. Assuming now as a timestamp.
	W0906 22:14:14.144191       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-20220906150606-22187-m02. Assuming now as a timestamp.
	I0906 22:14:14.144208       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0906 22:14:14.143839       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0906 22:14:14.144073       1 event.go:294] "Event occurred" object="multinode-20220906150606-22187" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220906150606-22187 event: Registered Node multinode-20220906150606-22187 in Controller"
	I0906 22:14:14.144296       1 event.go:294] "Event occurred" object="multinode-20220906150606-22187-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220906150606-22187-m02 event: Registered Node multinode-20220906150606-22187-m02 in Controller"
	I0906 22:14:14.144437       1 taint_manager.go:209] "Sending events to api server"
	I0906 22:14:14.146354       1 shared_informer.go:262] Caches are synced for daemon sets
	I0906 22:14:14.150957       1 shared_informer.go:262] Caches are synced for GC
	I0906 22:14:14.463934       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:14:14.515523       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:14:14.515593       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 22:14:49.667557       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-8r9qs"
	W0906 22:14:52.812880       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220906150606-22187-m02 node
	W0906 22:14:52.812927       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220906150606-22187-m02" does not exist
	I0906 22:14:52.816299       1 range_allocator.go:367] Set node multinode-20220906150606-22187-m02 PodCIDR to [10.244.1.0/24]
	I0906 22:14:54.139017       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-czbjx"
	I0906 22:14:54.143039       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-czbjx"
	I0906 22:14:54.143071       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kindnet-jkg8p"
	I0906 22:14:54.147097       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-jkg8p"
	I0906 22:14:54.147130       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="default/busybox-65db55d5d6-qmjcf"
	I0906 22:14:54.150381       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="default/busybox-65db55d5d6-qmjcf"
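The three PodGC force deletions are consistent with pods orphaned on the removed multinode-20220906150606-22187-m03 node: kube-proxy-czbjx and kindnet-jkg8p were marked NodeNotReady in the pre-restart controller-manager log above, and after the restart only m02 re-registers, so their node object no longer exists. To list pods still bound to a given node (illustrative, not from this run):
	kubectl --context multinode-20220906150606-22187 get pods -A --field-selector spec.nodeName=multinode-20220906150606-22187-m03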
	
	* 
	* ==> kube-proxy [803ede092469] <==
	* I0906 22:10:08.212824       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0906 22:10:08.212891       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0906 22:10:08.212969       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:10:08.240697       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:10:08.240756       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:10:08.240763       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:10:08.240772       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:10:08.240792       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:10:08.240986       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:10:08.241179       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:10:08.241188       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:10:08.242676       1 config.go:317] "Starting service config controller"
	I0906 22:10:08.242713       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:10:08.242731       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:10:08.242734       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:10:08.243279       1 config.go:444] "Starting node config controller"
	I0906 22:10:08.243307       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:10:08.343321       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:10:08.343393       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:10:08.343690       1 shared_informer.go:262] Caches are synced for node config
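The "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined" line is informational: kube-proxy still builds a dual-stack proxier pair, but the IPv6 half falls back to a no-op local-traffic detector because only an IPv4 pod CIDR is configured. In a kubeadm-provisioned cluster like this one, the effective settings can be read from the kube-proxy ConfigMap (illustrative):
	kubectl --context multinode-20220906150606-22187 -n kube-system get configmap kube-proxy -o yaml | grep -E 'clusterCIDR|detectLocalMode|mode:'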
	
	* 
	* ==> kube-proxy [fc45407356d8] <==
	* I0906 22:14:04.693452       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0906 22:14:04.693529       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0906 22:14:04.693564       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:14:04.714893       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:14:04.714931       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:14:04.714938       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:14:04.714946       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:14:04.714960       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:14:04.715664       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:14:04.716310       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:14:04.716369       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:14:04.717178       1 config.go:317] "Starting service config controller"
	I0906 22:14:04.717226       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:14:04.717239       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:14:04.717254       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:14:04.717988       1 config.go:444] "Starting node config controller"
	I0906 22:14:04.718042       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:14:04.818240       1 shared_informer.go:262] Caches are synced for node config
	I0906 22:14:04.818312       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:14:04.818369       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [a90f34cc5b1b] <==
	* I0906 22:13:57.876480       1 serving.go:348] Generated self-signed cert in-memory
	W0906 22:14:01.755543       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 22:14:01.755718       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:14:01.755763       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 22:14:01.755779       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 22:14:01.767002       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.0"
	I0906 22:14:01.767038       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:14:01.768087       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 22:14:01.768227       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 22:14:01.768315       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:14:01.768461       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0906 22:14:01.769007       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 22:14:01.770846       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 22:14:01.774267       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 22:14:01.774316       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 22:14:01.774402       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 22:14:01.774432       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0906 22:14:01.774914       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 22:14:01.774957       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0906 22:14:01.869059       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
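The forbidden list/watch errors at 22:14:01 look like a startup race rather than a real RBAC problem: the scheduler's informers started while the restarted apiserver was still syncing its own caches (the apiserver's "Caches are synced" lines above land in the same second), and no further reflector errors appear once the client-ca informer syncs. If such errors persisted, the scheduler's permissions could be verified directly (illustrative):
	kubectl --context multinode-20220906150606-22187 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler --all-namespaces
	kubectl --context multinode-20220906150606-22187 get clusterrolebinding system:kube-scheduler -o yaml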
	
	* 
	* ==> kube-scheduler [ef78db90e1cf] <==
	* I0906 22:10:03.522056       1 serving.go:348] Generated self-signed cert in-memory
	W0906 22:10:05.430779       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 22:10:05.431382       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:10:05.431579       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 22:10:05.431687       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 22:10:05.444404       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.0"
	I0906 22:10:05.444441       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:10:05.445894       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 22:10:05.445934       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:10:05.446107       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 22:10:05.448757       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:10:05.546127       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:13:22.353570       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0906 22:13:22.353664       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0906 22:13:22.353745       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0906 22:13:22.354054       1 run.go:74] "command failed" err="finished without leader elect"
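This container is the pre-restart scheduler shutting down at 22:13:22 when the node was stopped; "finished without leader elect" appears to be its normal exit path here rather than an independent failure, and the replacement scheduler (a90f34cc5b1b above) takes over. The active scheduler can be identified from its leader-election Lease (illustrative):
	kubectl --context multinode-20220906150606-22187 -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}{"\n"}'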
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:13:48 UTC, end at Tue 2022-09-06 22:17:15 UTC. --
	Sep 06 22:14:01 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:01.810558    1196 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdlc\" (UniqueName: \"kubernetes.io/projected/1cf732b5-70cb-44d1-acf9-34a0abad6541-kube-api-access-jmdlc\") pod \"busybox-65db55d5d6-trdqs\" (UID: \"1cf732b5-70cb-44d1-acf9-34a0abad6541\") " pod="default/busybox-65db55d5d6-trdqs"
	Sep 06 22:14:01 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:01.810568    1196 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 22:14:01 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:01.810643    1196 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 06 22:14:02 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:02.154537    1196 kubelet_node_status.go:108] "Node was previously registered" node="multinode-20220906150606-22187"
	Sep 06 22:14:02 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:02.154658    1196 kubelet_node_status.go:73] "Successfully registered node" node="multinode-20220906150606-22187"
	Sep 06 22:14:02 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:02.912666    1196 configmap.go:197] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:02 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:02.912720    1196 configmap.go:197] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:02 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:02.912762    1196 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0b228e9a-6577-46a3-b848-9c9fca602ba6-kube-proxy podName:0b228e9a-6577-46a3-b848-9c9fca602ba6 nodeName:}" failed. No retries permitted until 2022-09-06 22:14:03.41274433 +0000 UTC m=+6.725884369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0b228e9a-6577-46a3-b848-9c9fca602ba6-kube-proxy") pod "kube-proxy-kkmpm" (UID: "0b228e9a-6577-46a3-b848-9c9fca602ba6") : failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:02 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:02.912779    1196 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3d3ced34-e06b-4586-8c69-2f495e1290dd-config-volume podName:3d3ced34-e06b-4586-8c69-2f495e1290dd nodeName:}" failed. No retries permitted until 2022-09-06 22:14:03.412771952 +0000 UTC m=+6.725911988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3d3ced34-e06b-4586-8c69-2f495e1290dd-config-volume") pod "coredns-565d847f94-t6l66" (UID: "3d3ced34-e06b-4586-8c69-2f495e1290dd") : failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:02 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:02.926660    1196 request.go:601] Waited for 1.014564717s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token
	Sep 06 22:14:03 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:03.333612    1196 projected.go:290] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:03 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:03.333652    1196 projected.go:196] Error preparing data for projected volume kube-api-access-t7nvs for pod kube-system/kube-proxy-kkmpm: failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:03 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:03.333722    1196 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/0b228e9a-6577-46a3-b848-9c9fca602ba6-kube-api-access-t7nvs podName:0b228e9a-6577-46a3-b848-9c9fca602ba6 nodeName:}" failed. No retries permitted until 2022-09-06 22:14:03.833709014 +0000 UTC m=+7.146849047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t7nvs" (UniqueName: "kubernetes.io/projected/0b228e9a-6577-46a3-b848-9c9fca602ba6-kube-api-access-t7nvs") pod "kube-proxy-kkmpm" (UID: "0b228e9a-6577-46a3-b848-9c9fca602ba6") : failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:03 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:03.532462    1196 projected.go:290] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:03 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:03.532504    1196 projected.go:196] Error preparing data for projected volume kube-api-access-jmdlc for pod default/busybox-65db55d5d6-trdqs: failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:03 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:03.532566    1196 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/1cf732b5-70cb-44d1-acf9-34a0abad6541-kube-api-access-jmdlc podName:1cf732b5-70cb-44d1-acf9-34a0abad6541 nodeName:}" failed. No retries permitted until 2022-09-06 22:14:04.032553409 +0000 UTC m=+7.345693440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jmdlc" (UniqueName: "kubernetes.io/projected/1cf732b5-70cb-44d1-acf9-34a0abad6541-kube-api-access-jmdlc") pod "busybox-65db55d5d6-trdqs" (UID: "1cf732b5-70cb-44d1-acf9-34a0abad6541") : failed to sync configmap cache: timed out waiting for the condition
	Sep 06 22:14:04 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:04.187909    1196 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c07b4408a6a3cf4dce955fe4b5046540742eebbe4e02f063002e29e796b6284a"
	Sep 06 22:14:04 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:04.272000    1196 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e4319b7bffa2723d5e04ddf4dd4c6b46544d50f00da0e75a7150fe80b0841666"
	Sep 06 22:14:04 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:04.391642    1196 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="599705f4f546c55f0b49d65c5d5420365d752db7872b483ebfc4e22dc87af921"
	Sep 06 22:14:06 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:06.440662    1196 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Sep 06 22:14:12 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:12.745834    1196 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Sep 06 22:14:34 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:34.628800    1196 scope.go:115] "RemoveContainer" containerID="167b4a4f330648bdc94afb29313055dcf6a97832aa3d89fc9eb5744804b99c1d"
	Sep 06 22:14:34 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:34.629028    1196 scope.go:115] "RemoveContainer" containerID="32ecfafa90b955863dfcad23baa2f88914ab3444880d27a5c3b6e47414bc1060"
	Sep 06 22:14:34 multinode-20220906150606-22187 kubelet[1196]: E0906 22:14:34.629137    1196 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cf24b814-e576-465e-9c3e-f8c04c05c695)\"" pod="kube-system/storage-provisioner" podUID=cf24b814-e576-465e-9c3e-f8c04c05c695
	Sep 06 22:14:45 multinode-20220906150606-22187 kubelet[1196]: I0906 22:14:45.850606    1196 scope.go:115] "RemoveContainer" containerID="32ecfafa90b955863dfcad23baa2f88914ab3444880d27a5c3b6e47414bc1060"
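The CrashLoopBackOff entries all concern storage-provisioner: its first post-restart container (32ecfafa90b9) exited, kubelet backed off for 10s, and the RemoveContainer at 22:14:45 precedes the replacement container (06df22c3a5d0) that comes up cleanly below. Typical follow-up checks for a crash-looping system pod (illustrative, not from this run):
	kubectl --context multinode-20220906150606-22187 -n kube-system describe pod storage-provisioner
	kubectl --context multinode-20220906150606-22187 -n kube-system logs storage-provisioner --previous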
	
	* 
	* ==> storage-provisioner [06df22c3a5d0] <==
	* I0906 22:14:45.968032       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:14:45.977888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:14:45.977917       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:15:03.371882       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:15:03.372033       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220906150606-22187_62a5e7f8-ce15-4a01-85f4-da601f39a71b!
	I0906 22:15:03.372038       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0711daa2-101b-4a50-9513-72f9a901e5c3", APIVersion:"v1", ResourceVersion:"1234", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220906150606-22187_62a5e7f8-ce15-4a01-85f4-da601f39a71b became leader
	I0906 22:15:03.472699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220906150606-22187_62a5e7f8-ce15-4a01-85f4-da601f39a71b!
	
	* 
	* ==> storage-provisioner [32ecfafa90b9] <==
	* I0906 22:14:04.391516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0906 22:14:34.371615       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
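10.96.0.1:443 is the in-cluster kubernetes Service VIP (the first address of the 10.96.0.0/12 service CIDR), so a 30-second i/o timeout here usually means the service proxy rules were not yet programmed when this container started dialing; the replacement container above reaches the apiserver and acquires its lease without trouble. Illustrative checks for this class of failure:
	kubectl --context multinode-20220906150606-22187 get svc kubernetes
	# Inspect the iptables service rules kube-proxy programs on the node
	minikube -p multinode-20220906150606-22187 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head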
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-20220906150606-22187 -n multinode-20220906150606-22187
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-20220906150606-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-20220906150606-22187 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context multinode-20220906150606-22187 describe pod : exit status 1 (36.760013ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context multinode-20220906150606-22187 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/RestartMultiNode (209.54s)

x
+
TestPreload (267.98s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220906151800-22187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220906151800-22187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m24.975057062s)

-- stdout --
	* [test-preload-20220906151800-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node test-preload-20220906151800-22187 in cluster test-preload-20220906151800-22187
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0906 15:18:00.900229   29628 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:18:00.900415   29628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:18:00.900421   29628 out.go:309] Setting ErrFile to fd 2...
	I0906 15:18:00.900424   29628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:18:00.900524   29628 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:18:00.901031   29628 out.go:303] Setting JSON to false
	I0906 15:18:00.916697   29628 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8251,"bootTime":1662494429,"procs":331,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:18:00.916781   29628 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:18:00.942104   29628 out.go:177] * [test-preload-20220906151800-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:18:00.986797   29628 notify.go:193] Checking for updates...
	I0906 15:18:01.019939   29628 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:18:01.065111   29628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:18:01.108288   29628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:18:01.153185   29628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:18:01.197184   29628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:18:01.219473   29628 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:18:01.287958   29628 docker.go:137] docker version: linux-20.10.17
	I0906 15:18:01.288091   29628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:18:01.418334   29628 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:18:01.361618993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:18:01.465096   29628 out.go:177] * Using the docker driver based on user configuration
	I0906 15:18:01.487010   29628 start.go:284] selected driver: docker
	I0906 15:18:01.487034   29628 start.go:808] validating driver "docker" against <nil>
	I0906 15:18:01.487059   29628 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:18:01.490523   29628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:18:01.619197   29628 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-09-06 22:18:01.564247099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:18:01.619328   29628 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0906 15:18:01.619452   29628 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:18:01.641422   29628 out.go:177] * Using Docker Desktop driver with root privileges
	I0906 15:18:01.662757   29628 cni.go:95] Creating CNI manager for ""
	I0906 15:18:01.662784   29628 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:18:01.662793   29628 start_flags.go:310] config:
	{Name:test-preload-20220906151800-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220906151800-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:18:01.683729   29628 out.go:177] * Starting control plane node test-preload-20220906151800-22187 in cluster test-preload-20220906151800-22187
	I0906 15:18:01.727240   29628 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:18:01.748943   29628 out.go:177] * Pulling base image ...
	I0906 15:18:01.791152   29628 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0906 15:18:01.791116   29628 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:18:01.791427   29628 cache.go:107] acquiring lock: {Name:mk7078dbe496c905d4928b9b07d4fb130f0f8e99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:18:01.792292   29628 cache.go:107] acquiring lock: {Name:mk5c7fa2370bf5670cd0ca7be4034f8b4e5efab5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:18:01.791427   29628 cache.go:107] acquiring lock: {Name:mk92475b0bee6f0f1eb4f343a13c5c98694bc63a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:18:01.792746   29628 cache.go:107] acquiring lock: {Name:mk876c08549c52aeae447a7efb802d2457e9bd99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:18:01.792769   29628 cache.go:107] acquiring lock: {Name:mkbf8b009a3d2b00f7e3be88c23af0796d39aeca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:18:01.792848   29628 cache.go:107] acquiring lock: {Name:mk71cf30bc567906959df749e16cc2a2af1b5994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:18:01.792895   29628 cache.go:107] acquiring lock: {Name:mk6d39ca20e026223bb0bfdea7e5db222e0e319d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:18:01.793006   29628 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 15:18:01.793476   29628 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.021697ms
	I0906 15:18:01.793500   29628 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 15:18:01.793587   29628 cache.go:107] acquiring lock: {Name:mk1a273f79ee10771446ce94d77f9e46b2f9f017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:18:01.793681   29628 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0906 15:18:01.793844   29628 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/config.json ...
	I0906 15:18:01.793905   29628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/config.json: {Name:mke62591825e718edded9c8b15fa4487f5dfd16a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:18:01.793929   29628 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0906 15:18:01.794064   29628 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0906 15:18:01.794071   29628 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0906 15:18:01.794105   29628 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0906 15:18:01.794218   29628 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0906 15:18:01.794224   29628 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0906 15:18:01.800803   29628 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0906 15:18:01.801960   29628 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0906 15:18:01.802334   29628 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0906 15:18:01.803398   29628 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0906 15:18:01.803791   29628 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0906 15:18:01.804272   29628 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0906 15:18:01.804819   29628 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0906 15:18:01.856989   29628 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:18:01.857013   29628 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:18:01.857029   29628 cache.go:208] Successfully downloaded all kic artifacts
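With --preload=false (and v1.17.0 requested), minikube skips the preloaded tarball and caches the individual k8s.gcr.io images instead, which is what the per-image lock and "daemon lookup ... No such image" lines above reflect: the images are simply not in the local Docker daemon yet and will be pulled into the on-disk cache. Assuming the usual cache layout seen in the storage-provisioner entry above, the cached images could be listed like this (illustrative, not from the run):
	ls /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io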
	I0906 15:18:01.857074   29628 start.go:364] acquiring machines lock for test-preload-20220906151800-22187: {Name:mk5940c4af5398fda8f31bca6704098a0f1d02c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:18:01.857207   29628 start.go:368] acquired machines lock for "test-preload-20220906151800-22187" in 121.841µs
	I0906 15:18:01.857232   29628 start.go:93] Provisioning new machine with config: &{Name:test-preload-20220906151800-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220906151800-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:18:01.857338   29628 start.go:125] createHost starting for "" (driver="docker")
	I0906 15:18:01.899826   29628 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0906 15:18:01.900077   29628 start.go:159] libmachine.API.Create for "test-preload-20220906151800-22187" (driver="docker")
	I0906 15:18:01.900106   29628 client.go:168] LocalClient.Create starting
	I0906 15:18:01.900173   29628 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem
	I0906 15:18:01.900207   29628 main.go:134] libmachine: Decoding PEM data...
	I0906 15:18:01.900222   29628 main.go:134] libmachine: Parsing certificate...
	I0906 15:18:01.900275   29628 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem
	I0906 15:18:01.900297   29628 main.go:134] libmachine: Decoding PEM data...
	I0906 15:18:01.900309   29628 main.go:134] libmachine: Parsing certificate...
	I0906 15:18:01.901749   29628 cli_runner.go:164] Run: docker network inspect test-preload-20220906151800-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 15:18:01.964376   29628 cli_runner.go:211] docker network inspect test-preload-20220906151800-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 15:18:01.964449   29628 network_create.go:272] running [docker network inspect test-preload-20220906151800-22187] to gather additional debugging logs...
	I0906 15:18:01.964469   29628 cli_runner.go:164] Run: docker network inspect test-preload-20220906151800-22187
	W0906 15:18:02.026294   29628 cli_runner.go:211] docker network inspect test-preload-20220906151800-22187 returned with exit code 1
	I0906 15:18:02.026318   29628 network_create.go:275] error running [docker network inspect test-preload-20220906151800-22187]: docker network inspect test-preload-20220906151800-22187: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220906151800-22187
	I0906 15:18:02.026343   29628 network_create.go:277] output of [docker network inspect test-preload-20220906151800-22187]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220906151800-22187
	
	** /stderr **
	I0906 15:18:02.026428   29628 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 15:18:02.089046   29628 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a52068] misses:0}
	I0906 15:18:02.089095   29628 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:18:02.089122   29628 network_create.go:115] attempt to create docker network test-preload-20220906151800-22187 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0906 15:18:02.089179   29628 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220906151800-22187 test-preload-20220906151800-22187
	W0906 15:18:02.150751   29628 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220906151800-22187 test-preload-20220906151800-22187 returned with exit code 1
	W0906 15:18:02.150789   29628 network_create.go:107] failed to create docker network test-preload-20220906151800-22187 192.168.49.0/24, will retry: subnet is taken
	I0906 15:18:02.151040   29628 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a52068] amended:false}} dirty:map[] misses:0}
	I0906 15:18:02.151059   29628 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:18:02.151253   29628 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a52068] amended:true}} dirty:map[192.168.49.0:0xc000a52068 192.168.58.0:0xc000c02140] misses:0}
	I0906 15:18:02.151268   29628 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:18:02.151290   29628 network_create.go:115] attempt to create docker network test-preload-20220906151800-22187 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0906 15:18:02.151354   29628 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220906151800-22187 test-preload-20220906151800-22187
	W0906 15:18:02.212164   29628 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220906151800-22187 test-preload-20220906151800-22187 returned with exit code 1
	W0906 15:18:02.212193   29628 network_create.go:107] failed to create docker network test-preload-20220906151800-22187 192.168.58.0/24, will retry: subnet is taken
	I0906 15:18:02.212507   29628 network.go:281] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a52068] amended:true}} dirty:map[192.168.49.0:0xc000a52068 192.168.58.0:0xc000c02140] misses:1}
	I0906 15:18:02.212525   29628 network.go:239] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:18:02.212726   29628 network.go:290] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a52068] amended:true}} dirty:map[192.168.49.0:0xc000a52068 192.168.58.0:0xc000c02140 192.168.67.0:0xc000c02188] misses:1}
	I0906 15:18:02.212738   29628 network.go:236] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:18:02.212746   29628 network_create.go:115] attempt to create docker network test-preload-20220906151800-22187 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0906 15:18:02.212801   29628 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220906151800-22187 test-preload-20220906151800-22187
	I0906 15:18:02.306383   29628 network_create.go:99] docker network test-preload-20220906151800-22187 192.168.67.0/24 created
	I0906 15:18:02.306412   29628 kic.go:106] calculated static IP "192.168.67.2" for the "test-preload-20220906151800-22187" container
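The three network_create.go attempts above show minikube's subnet probing: each candidate 192.168.x.0/24 is checked against an in-memory reservation map, and the first free one yields the gateway (x.1) and the node's static IP (x.2). Below is a minimal Go sketch of that pattern; it collapses the log's two bookkeeping layers (the sync.Map reservation printout and the reserved-subnet check) into one map whose contents, like the step size, are assumptions for illustration, not minikube's actual network.go.

    // probe.go: illustrative only, not minikube's network.go
    package main

    import (
    	"fmt"
    	"net"
    )

    // reserved stands in for the reservation state the log lines print.
    var reserved = map[string]bool{
    	"192.168.49.0": true, // subnet is taken by an existing network
    	"192.168.58.0": true, // reserved for 1m0s moments earlier
    }

    func main() {
    	// Candidates step by 9 in the third octet (49, 58, 67, ...), matching
    	// the progression in the log; the step size is an assumption here.
    	for third := 49; third <= 255; third += 9 {
    		base := net.IPv4(192, 168, byte(third), 0)
    		if reserved[base.String()] {
    			fmt.Printf("skipping subnet %s/24 that is reserved\n", base)
    			continue
    		}
    		gateway := net.IPv4(192, 168, byte(third), 1)
    		static := net.IPv4(192, 168, byte(third), 2)
    		fmt.Printf("using free private subnet %s/24 (gateway %s, static IP %s)\n", base, gateway, static)
    		return
    	}
    }

Run as-is, the sketch lands on 192.168.67.0/24 with a static IP of 192.168.67.2, matching the kic.go line above.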
	I0906 15:18:02.306489   29628 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 15:18:02.367745   29628 cli_runner.go:164] Run: docker volume create test-preload-20220906151800-22187 --label name.minikube.sigs.k8s.io=test-preload-20220906151800-22187 --label created_by.minikube.sigs.k8s.io=true
	I0906 15:18:02.430434   29628 oci.go:103] Successfully created a docker volume test-preload-20220906151800-22187
	I0906 15:18:02.430524   29628 cli_runner.go:164] Run: docker run --rm --name test-preload-20220906151800-22187-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220906151800-22187 --entrypoint /usr/bin/test -v test-preload-20220906151800-22187:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -d /var/lib
	I0906 15:18:02.880065   29628 oci.go:107] Successfully prepared a docker volume test-preload-20220906151800-22187
	I0906 15:18:02.880089   29628 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0906 15:18:02.880164   29628 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 15:18:03.008560   29628 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220906151800-22187 --name test-preload-20220906151800-22187 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220906151800-22187 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220906151800-22187 --network test-preload-20220906151800-22187 --ip 192.168.67.2 --volume test-preload-20220906151800-22187:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d
	I0906 15:18:03.377915   29628 cli_runner.go:164] Run: docker container inspect test-preload-20220906151800-22187 --format={{.State.Running}}
	I0906 15:18:03.441936   29628 cli_runner.go:164] Run: docker container inspect test-preload-20220906151800-22187 --format={{.State.Status}}
	I0906 15:18:03.510075   29628 cli_runner.go:164] Run: docker exec test-preload-20220906151800-22187 stat /var/lib/dpkg/alternatives/iptables
	I0906 15:18:03.610984   29628 oci.go:144] the created container "test-preload-20220906151800-22187" has a running status.
	I0906 15:18:03.611011   29628 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/test-preload-20220906151800-22187/id_rsa...
	I0906 15:18:03.686078   29628 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/test-preload-20220906151800-22187/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 15:18:03.798447   29628 cli_runner.go:164] Run: docker container inspect test-preload-20220906151800-22187 --format={{.State.Status}}
	I0906 15:18:03.864139   29628 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 15:18:03.864153   29628 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220906151800-22187 chown docker:docker /home/docker/.ssh/authorized_keys]
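The kic.go and kic_runner.go steps above create a fresh SSH keypair for the node and install the public half as /home/docker/.ssh/authorized_keys (381 bytes, which is consistent with a 2048-bit RSA key). A minimal sketch of producing that single authorized_keys line, assuming the golang.org/x/crypto/ssh package; this is illustration, not minikube's code:

    // authkey.go: illustrative sketch, not minikube's kic.go
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"fmt"
    	"log"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// 2048 bits is an assumption; the log only reports the resulting size.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// One "ssh-rsa AAAA..." line: the entire payload of authorized_keys.
    	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
    }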
	I0906 15:18:03.890137   29628 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0906 15:18:03.976326   29628 cli_runner.go:164] Run: docker container inspect test-preload-20220906151800-22187 --format={{.State.Status}}
	I0906 15:18:04.038606   29628 machine.go:88] provisioning docker machine ...
	I0906 15:18:04.038642   29628 ubuntu.go:169] provisioning hostname "test-preload-20220906151800-22187"
	I0906 15:18:04.038717   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:04.045420   29628 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0906 15:18:04.058803   29628 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0906 15:18:04.069475   29628 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0906 15:18:04.079717   29628 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0906 15:18:04.119532   29628 main.go:134] libmachine: Using SSH client type: native
	I0906 15:18:04.119714   29628 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57411 <nil> <nil>}
	I0906 15:18:04.119725   29628 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220906151800-22187 && echo "test-preload-20220906151800-22187" | sudo tee /etc/hostname
	I0906 15:18:04.123837   29628 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0906 15:18:04.124064   29628 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0906 15:18:04.124080   29628 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 2.332644516s
	I0906 15:18:04.124091   29628 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0906 15:18:04.162319   29628 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0906 15:18:04.241306   29628 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220906151800-22187
	
	I0906 15:18:04.241379   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:04.304308   29628 main.go:134] libmachine: Using SSH client type: native
	I0906 15:18:04.304456   29628 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57411 <nil> <nil>}
	I0906 15:18:04.304474   29628 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220906151800-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220906151800-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220906151800-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:18:04.415682   29628 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:18:04.415702   29628 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:18:04.415721   29628 ubuntu.go:177] setting up certificates
	I0906 15:18:04.415727   29628 provision.go:83] configureAuth start
	I0906 15:18:04.415792   29628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220906151800-22187
	I0906 15:18:04.477612   29628 provision.go:138] copyHostCerts
	I0906 15:18:04.477687   29628 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:18:04.477694   29628 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:18:04.477784   29628 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:18:04.477977   29628 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:18:04.477986   29628 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:18:04.478047   29628 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:18:04.478185   29628 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:18:04.478190   29628 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:18:04.478242   29628 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:18:04.478366   29628 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220906151800-22187 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220906151800-22187]
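The provision.go line above generates the Docker daemon's TLS server certificate with SANs covering every address it will be dialed on: the container IP, loopback, and the machine names. A compact sketch of the equivalent x509 work using only the Go standard library; the SAN list and org name mirror the log, but the certificate fields are illustrative, not minikube's provision.go:

    // servercert.go: illustrative sketch of a CA-signed server cert with the logged SANs
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"log"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Self-signed CA standing in for .minikube/certs/ca.pem.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"}, // name assumed
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Server cert with the SAN list from the provision.go line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-20220906151800-22187"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "test-preload-20220906151800-22187"},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("server cert: %d DER bytes", len(srvDER))
    }

The copyRemoteCerts steps that follow then scp ca.pem, server.pem, and server-key.pem into /etc/docker so dockerd can serve TLS on 2376.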
	I0906 15:18:04.624826   29628 provision.go:172] copyRemoteCerts
	I0906 15:18:04.624887   29628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:18:04.624931   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:04.688758   29628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57411 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/test-preload-20220906151800-22187/id_rsa Username:docker}
	I0906 15:18:04.771617   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:18:04.788534   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0906 15:18:04.805244   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:18:04.822328   29628 provision.go:86] duration metric: configureAuth took 406.585442ms
	I0906 15:18:04.822342   29628 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:18:04.822513   29628 config.go:180] Loaded profile config "test-preload-20220906151800-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0906 15:18:04.822572   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:04.887584   29628 main.go:134] libmachine: Using SSH client type: native
	I0906 15:18:04.887731   29628 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57411 <nil> <nil>}
	I0906 15:18:04.887742   29628 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:18:05.001152   29628 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:18:05.001167   29628 ubuntu.go:71] root file system type: overlay
	I0906 15:18:05.001343   29628 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:18:05.001418   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:05.065323   29628 main.go:134] libmachine: Using SSH client type: native
	I0906 15:18:05.065543   29628 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57411 <nil> <nil>}
	I0906 15:18:05.065594   29628 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:18:05.184954   29628 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:18:05.185036   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:05.250406   29628 main.go:134] libmachine: Using SSH client type: native
	I0906 15:18:05.250536   29628 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57411 <nil> <nil>}
	I0906 15:18:05.250549   29628 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:18:05.817862   29628 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:18:05.202510480 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0906 15:18:05.817882   29628 machine.go:91] provisioned docker machine in 1.779252041s
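The `diff ... || { mv ...; systemctl ...; }` one-liner whose output appears above is an idempotence guard: the new unit is only swapped in, and dockerd only restarted, when diff reports a change. The same pattern in Go, with paths taken from the log; running it for real requires root, so treat it purely as illustration:

    // unitswap.go: sketch of the diff-then-replace idiom from the log
    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    func updateUnit(current, next string) error {
    	have, _ := os.ReadFile(current) // a missing unit reads as empty, forcing install
    	want, err := os.ReadFile(next)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(have, want) {
    		return nil // unchanged: skip the disruptive daemon restart
    	}
    	if err := os.Rename(next, current); err != nil {
    		return err
    	}
    	// Same order as the log: daemon-reload, enable, restart.
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"-f", "enable", "docker"},
    		{"-f", "restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := updateUnit(
    		"/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new",
    	); err != nil {
    		log.Fatal(err)
    	}
    }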
	I0906 15:18:05.817888   29628 client.go:171] LocalClient.Create took 3.917763795s
	I0906 15:18:05.817903   29628 start.go:167] duration metric: libmachine.API.Create for "test-preload-20220906151800-22187" took 3.917811582s
	I0906 15:18:05.817914   29628 start.go:300] post-start starting for "test-preload-20220906151800-22187" (driver="docker")
	I0906 15:18:05.817919   29628 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:18:05.817973   29628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:18:05.818019   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:05.882129   29628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57411 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/test-preload-20220906151800-22187/id_rsa Username:docker}
	I0906 15:18:05.963653   29628 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:18:05.967126   29628 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:18:05.967139   29628 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:18:05.967146   29628 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:18:05.967153   29628 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:18:05.967162   29628 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:18:05.967280   29628 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:18:05.967420   29628 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:18:05.967568   29628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:18:05.974880   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:18:05.992047   29628 start.go:303] post-start completed in 174.123195ms
	I0906 15:18:05.992577   29628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220906151800-22187
	I0906 15:18:06.054914   29628 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/config.json ...
	I0906 15:18:06.055290   29628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:18:06.055363   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:06.115346   29628 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0906 15:18:06.115367   29628 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 4.322712278s
	I0906 15:18:06.115407   29628 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0906 15:18:06.118209   29628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57411 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/test-preload-20220906151800-22187/id_rsa Username:docker}
	I0906 15:18:06.197576   29628 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:18:06.202383   29628 start.go:128] duration metric: createHost completed in 4.345020347s
	I0906 15:18:06.202399   29628 start.go:83] releasing machines lock for "test-preload-20220906151800-22187", held for 4.345168259s
	I0906 15:18:06.202479   29628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220906151800-22187
	I0906 15:18:06.266687   29628 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0906 15:18:06.266764   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:06.330515   29628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57411 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/test-preload-20220906151800-22187/id_rsa Username:docker}
	I0906 15:18:07.949397   29628 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0906 15:18:07.949417   29628 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 6.15697499s
	I0906 15:18:07.949428   29628 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0906 15:18:08.571890   29628 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0906 15:18:08.571919   29628 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 6.780481545s
	I0906 15:18:08.571932   29628 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0906 15:18:08.621292   29628 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0906 15:18:08.621308   29628 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 6.828516349s
	I0906 15:18:08.621319   29628 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0906 15:18:10.596569   29628 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0906 15:18:10.596587   29628 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 8.803956523s
	I0906 15:18:10.596596   29628 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0906 15:18:11.012295   29628 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0906 15:18:11.012319   29628 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 9.219687634s
	I0906 15:18:11.012331   29628 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0906 15:18:11.012348   29628 cache.go:87] Successfully saved all images to host disk.
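The seven interleaved "opening:" / "cache image ... took" sequences above indicate the images are fetched concurrently. A stripped-down sketch of that fan-out shape; saveToTar is a hypothetical stand-in for the real download-and-save work, and the concurrency structure is an assumption from the interleaving, not minikube's cache.go:

    // cachefan.go: illustrative concurrency shape only
    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // saveToTar is hypothetical; it stands in for download-and-save-to-tar.
    func saveToTar(image string) error {
    	time.Sleep(100 * time.Millisecond) // simulate work
    	return nil
    }

    func main() {
    	images := []string{
    		"k8s.gcr.io/pause:3.1",
    		"k8s.gcr.io/coredns:1.6.5",
    		"k8s.gcr.io/etcd:3.4.3-0",
    		"k8s.gcr.io/kube-proxy:v1.17.0",
    	}
    	start := time.Now()
    	var wg sync.WaitGroup
    	for _, img := range images {
    		wg.Add(1)
    		go func(img string) {
    			defer wg.Done()
    			if err := saveToTar(img); err != nil {
    				fmt.Printf("caching %s failed: %v\n", img, err)
    				return
    			}
    			// mirrors the `cache image "..." took ...` lines in the log
    			fmt.Printf("cache image %q took %s\n", img, time.Since(start))
    		}(img)
    	}
    	wg.Wait()
    	fmt.Println("Successfully saved all images to host disk.")
    }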
	I0906 15:18:11.012423   29628 ssh_runner.go:195] Run: systemctl --version
	I0906 15:18:11.017576   29628 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:18:11.027203   29628 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:18:11.027255   29628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:18:11.036296   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:18:11.048542   29628 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:18:11.115352   29628 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:18:11.184144   29628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:18:11.250145   29628 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:18:11.449828   29628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:18:11.486035   29628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:18:11.565193   29628 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	I0906 15:18:11.565303   29628 cli_runner.go:164] Run: docker exec -t test-preload-20220906151800-22187 dig +short host.docker.internal
	I0906 15:18:11.683298   29628 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:18:11.683389   29628 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:18:11.687662   29628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:18:11.697175   29628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220906151800-22187
	I0906 15:18:11.760915   29628 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0906 15:18:11.760975   29628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:18:11.790020   29628 docker.go:611] Got preloaded images: 
	I0906 15:18:11.790031   29628 docker.go:617] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0906 15:18:11.790035   29628 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 15:18:11.796535   29628 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0906 15:18:11.797227   29628 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0906 15:18:11.797824   29628 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0906 15:18:11.798074   29628 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0906 15:18:11.799146   29628 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:18:11.799570   29628 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0906 15:18:11.799830   29628 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0906 15:18:11.800537   29628 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0906 15:18:11.805760   29628 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0906 15:18:11.807031   29628 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0906 15:18:11.808678   29628 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:18:11.808814   29628 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0906 15:18:11.808842   29628 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0906 15:18:11.808848   29628 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0906 15:18:11.809677   29628 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0906 15:18:11.809772   29628 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0906 15:18:12.421945   29628 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:18:12.452021   29628 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0906 15:18:12.452059   29628 docker.go:292] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:18:12.452117   29628 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:18:12.481271   29628 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 15:18:12.481390   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 15:18:12.485390   29628 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0906 15:18:12.485409   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0906 15:18:12.608760   29628 docker.go:259] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 15:18:12.608776   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0906 15:18:12.943148   29628 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
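Each docker.go "Loading image" step streams a cached tarball into the daemon exactly as the shell shows: `sudo cat <tar> | docker load`. A Go equivalent wires the file straight into the command's stdin; the path is the container-side one from the log, and the sketch assumes a docker CLI on PATH:

    // imgload.go: sketch of piping a cached image tar into docker load
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	f, err := os.Open("/var/lib/minikube/images/storage-provisioner_v5")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f // stands in for the `cat ... |` half of the pipeline
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("docker load: %v\n%s", err, out)
    	}
    	log.Printf("%s", out)
    }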
	I0906 15:18:13.609434   29628 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0906 15:18:13.609611   29628 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0906 15:18:13.613788   29628 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0906 15:18:13.643754   29628 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0906 15:18:13.643789   29628 docker.go:292] Removing image: k8s.gcr.io/coredns:1.6.5
	I0906 15:18:13.643857   29628 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0906 15:18:13.644987   29628 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0906 15:18:13.645009   29628 docker.go:292] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0906 15:18:13.645069   29628 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0906 15:18:13.647575   29628 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0906 15:18:13.647604   29628 docker.go:292] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0906 15:18:13.647663   29628 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0906 15:18:13.678926   29628 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0906 15:18:13.679052   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0906 15:18:13.680332   29628 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0906 15:18:13.680359   29628 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0906 15:18:13.680445   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0906 15:18:13.680451   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0906 15:18:13.682984   29628 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0906 15:18:13.683003   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0906 15:18:13.685195   29628 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0906 15:18:13.685206   29628 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0906 15:18:13.685215   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0906 15:18:13.685225   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0906 15:18:13.767233   29628 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0906 15:18:13.825294   29628 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0906 15:18:13.831136   29628 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0906 15:18:13.835216   29628 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0906 15:18:13.856512   29628 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0906 15:18:13.856541   29628 docker.go:292] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0906 15:18:13.856598   29628 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0906 15:18:13.919827   29628 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0906 15:18:13.919859   29628 docker.go:292] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0906 15:18:13.919855   29628 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0906 15:18:13.919898   29628 docker.go:292] Removing image: k8s.gcr.io/pause:3.1
	I0906 15:18:13.919928   29628 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0906 15:18:13.919915   29628 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0906 15:18:13.919957   29628 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0906 15:18:13.919976   29628 docker.go:292] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0906 15:18:13.920021   29628 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0906 15:18:13.943028   29628 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0906 15:18:13.943171   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0906 15:18:14.001668   29628 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0906 15:18:14.001791   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0906 15:18:14.015636   29628 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0906 15:18:14.015639   29628 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0906 15:18:14.015695   29628 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0906 15:18:14.015714   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0906 15:18:14.015787   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0906 15:18:14.015838   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0906 15:18:14.038547   29628 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0906 15:18:14.038581   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0906 15:18:14.052584   29628 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0906 15:18:14.052572   29628 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0906 15:18:14.052610   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0906 15:18:14.052613   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0906 15:18:14.155331   29628 docker.go:259] Loading image: /var/lib/minikube/images/pause_3.1
	I0906 15:18:14.155357   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0906 15:18:14.376473   29628 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0906 15:18:14.376503   29628 docker.go:259] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0906 15:18:14.376513   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0906 15:18:15.139961   29628 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0906 15:18:15.918149   29628 docker.go:259] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0906 15:18:15.921811   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0906 15:18:17.844336   29628 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load": (1.922501976s)
	I0906 15:18:17.844348   29628 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0906 15:18:17.844368   29628 docker.go:259] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0906 15:18:17.844379   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0906 15:18:18.783525   29628 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0906 15:18:18.783565   29628 docker.go:259] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0906 15:18:18.783583   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0906 15:18:19.713464   29628 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0906 15:18:19.713489   29628 docker.go:259] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0906 15:18:19.713509   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0906 15:18:20.816078   29628 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.102550265s)
	I0906 15:18:20.816091   29628 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0906 15:18:20.816131   29628 docker.go:259] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0906 15:18:20.816142   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0906 15:18:23.774336   29628 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (2.958168985s)
	I0906 15:18:23.774349   29628 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0906 15:18:23.774375   29628 cache_images.go:123] Successfully loaded all cached images
	I0906 15:18:23.774381   29628 cache_images.go:92] LoadImages completed in 11.984294309s
	I0906 15:18:23.774452   29628 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:18:23.848125   29628 cni.go:95] Creating CNI manager for ""
	I0906 15:18:23.848139   29628 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:18:23.848161   29628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:18:23.848175   29628 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220906151800-22187 NodeName:test-preload-20220906151800-22187 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:18:23.848289   29628 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220906151800-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
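The generated kubeadm.yaml above can be sanity-checked before a real init; a sketch, assuming kubeadm v1.17.0 is available on the node:

	# Render and validate the config without changing the host;
	# kubeadm init accepts --dry-run for exactly this purpose.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run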
	
	I0906 15:18:23.848356   29628 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220906151800-22187 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220906151800-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
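Once the unit and drop-in above are written to the node (see the scp steps below), the effective kubelet invocation can be confirmed with standard systemd tooling; a sketch:

	# systemd merges /lib/systemd/system/kubelet.service with the
	# 10-kubeadm.conf drop-in; `systemctl cat` shows the merged unit.
	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf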
	I0906 15:18:23.848423   29628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0906 15:18:23.856603   29628 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0906 15:18:23.856653   29628 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0906 15:18:23.864530   29628 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/linux/amd64/v1.17.0/kubeadm
	I0906 15:18:23.864535   29628 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/linux/amd64/v1.17.0/kubectl
	I0906 15:18:23.864547   29628 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/linux/amd64/v1.17.0/kubelet
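Each download above is pinned to its published SHA-256 via the ?checksum=file:... suffix. A standalone sketch of the same verification (URLs as in the log; the .sha256 object holds the bare hex digest):

	BASE=https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64
	curl -fsSLo kubeadm "$BASE/kubeadm"
	curl -fsSLo kubeadm.sha256 "$BASE/kubeadm.sha256"
	# Pair the digest with the filename (two spaces) in the format
	# that sha256sum --check expects, then verify before caching.
	echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check -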
	I0906 15:18:25.145520   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0906 15:18:25.150231   29628 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0906 15:18:25.150260   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0906 15:18:25.266820   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0906 15:18:25.321213   29628 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0906 15:18:25.321252   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0906 15:18:26.338374   29628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:18:26.347995   29628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0906 15:18:26.351699   29628 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0906 15:18:26.351718   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0906 15:18:27.812396   29628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:18:27.819486   29628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0906 15:18:27.831760   29628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:18:27.844298   29628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0906 15:18:27.857595   29628 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:18:27.861495   29628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
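Unrolled, the /etc/hosts one-liner above does three things (a sketch of the same steps):

	# Drop any stale control-plane.minikube.internal line, append the
	# current mapping, then copy the temp file back into place.
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.67.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts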
	I0906 15:18:27.871018   29628 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187 for IP: 192.168.67.2
	I0906 15:18:27.871124   29628 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:18:27.871178   29628 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:18:27.871214   29628 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/client.key
	I0906 15:18:27.871226   29628 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/client.crt with IP's: []
	I0906 15:18:27.912300   29628 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/client.crt ...
	I0906 15:18:27.912309   29628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/client.crt: {Name:mkf9a42e47158fcf87835654365553b32e98a4b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:18:27.912584   29628 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/client.key ...
	I0906 15:18:27.912593   29628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/client.key: {Name:mk77477d2292763cb35ee6829dcabc5a42bb167c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:18:27.912790   29628 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.key.c7fa3a9e
	I0906 15:18:27.912804   29628 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 15:18:27.973102   29628 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.crt.c7fa3a9e ...
	I0906 15:18:27.973116   29628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.crt.c7fa3a9e: {Name:mke70b23b49c0839db0976779d3bd137171a3784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:18:27.973358   29628 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.key.c7fa3a9e ...
	I0906 15:18:27.973365   29628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.key.c7fa3a9e: {Name:mkf13c83f41132e24134182e4ea4dd535429bc04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:18:27.973545   29628 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.crt
	I0906 15:18:27.973714   29628 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.key
	I0906 15:18:27.973865   29628 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/proxy-client.key
	I0906 15:18:27.973883   29628 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/proxy-client.crt with IP's: []
	I0906 15:18:28.008934   29628 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/proxy-client.crt ...
	I0906 15:18:28.008944   29628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/proxy-client.crt: {Name:mkd01e999e1343a3c3c9a199e758343f24f29143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:18:28.009144   29628 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/proxy-client.key ...
	I0906 15:18:28.009152   29628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/proxy-client.key: {Name:mk4ef04f9cb0ccc8a571fdc91835268e26c33a88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:18:28.009472   29628 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:18:28.009507   29628 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:18:28.009516   29628 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:18:28.009551   29628 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:18:28.009578   29628 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:18:28.009606   29628 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:18:28.009663   29628 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:18:28.010115   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:18:28.027557   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:18:28.044413   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:18:28.061061   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/test-preload-20220906151800-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:18:28.078033   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:18:28.094545   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:18:28.111089   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:18:28.128209   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:18:28.145223   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:18:28.162272   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:18:28.178302   29628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:18:28.201251   29628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:18:28.215112   29628 ssh_runner.go:195] Run: openssl version
	I0906 15:18:28.220430   29628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:18:28.228130   29628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:18:28.231931   29628 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:18:28.231974   29628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:18:28.236780   29628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:18:28.244590   29628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:18:28.252259   29628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:18:28.256075   29628 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:18:28.256114   29628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:18:28.261153   29628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:18:28.268721   29628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:18:28.276229   29628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:18:28.280217   29628 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:18:28.280257   29628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:18:28.285560   29628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
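The hash-named links above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's lookup convention: CAs in /etc/ssl/certs are located by an 8-hex-digit subject-name hash. A sketch of how one such link is derived:

	# `openssl x509 -hash` prints the subject hash that OpenSSL uses
	# to resolve <hash>.0 symlinks in the certificates directory.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"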
	I0906 15:18:28.293123   29628 kubeadm.go:396] StartCluster: {Name:test-preload-20220906151800-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220906151800-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:18:28.293212   29628 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:18:28.321093   29628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:18:28.328792   29628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:18:28.336375   29628 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:18:28.336420   29628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:18:28.343534   29628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:18:28.343565   29628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:18:28.386889   29628 kubeadm.go:317] [init] Using Kubernetes version: v1.17.0
	I0906 15:18:28.386941   29628 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:18:28.679262   29628 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:18:28.679370   29628 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:18:28.679446   29628 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 15:18:28.950786   29628 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:18:28.951333   29628 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:18:28.951370   29628 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 15:18:29.021520   29628 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:18:29.054099   29628 out.go:204]   - Generating certificates and keys ...
	I0906 15:18:29.054170   29628 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:18:29.054225   29628 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:18:29.081819   29628 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 15:18:29.348687   29628 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0906 15:18:29.462242   29628 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0906 15:18:29.555830   29628 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0906 15:18:29.626705   29628 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0906 15:18:29.626818   29628 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [test-preload-20220906151800-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0906 15:18:29.990876   29628 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0906 15:18:29.990985   29628 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [test-preload-20220906151800-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0906 15:18:30.120349   29628 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 15:18:30.587065   29628 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 15:18:30.643538   29628 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0906 15:18:30.643686   29628 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:18:30.864409   29628 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:18:31.069196   29628 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:18:31.323150   29628 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:18:31.547815   29628 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:18:31.548259   29628 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:18:31.569805   29628 out.go:204]   - Booting up control plane ...
	I0906 15:18:31.570012   29628 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:18:31.570091   29628 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:18:31.570171   29628 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:18:31.570258   29628 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:18:31.570457   29628 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:19:11.530270   29628 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:19:11.530636   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:19:11.530811   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:19:16.528625   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:19:16.528796   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:19:26.523384   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:19:26.523555   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:19:46.509876   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:19:46.510096   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:20:26.481999   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:20:26.482157   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:20:26.482166   29628 kubeadm.go:317] 
	I0906 15:20:26.482225   29628 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:20:26.482257   29628 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:20:26.482262   29628 kubeadm.go:317] 
	I0906 15:20:26.482285   29628 kubeadm.go:317] This error is likely caused by:
	I0906 15:20:26.482312   29628 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:20:26.482389   29628 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:20:26.482398   29628 kubeadm.go:317] 
	I0906 15:20:26.482482   29628 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:20:26.482509   29628 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:20:26.482534   29628 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:20:26.482537   29628 kubeadm.go:317] 
	I0906 15:20:26.482620   29628 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:20:26.482713   29628 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0906 15:20:26.482778   29628 kubeadm.go:317] Here is one example of how you may list all Kubernetes containers running in docker:
	I0906 15:20:26.482815   29628 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:20:26.482886   29628 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:20:26.482919   29628 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:20:26.484984   29628 kubeadm.go:317] W0906 22:18:28.401439    1576 validation.go:28] Cannot validate kube-proxy config - no validator is available
	I0906 15:20:26.485069   29628 kubeadm.go:317] W0906 22:18:28.401492    1576 validation.go:28] Cannot validate kubelet config - no validator is available
	I0906 15:20:26.485162   29628 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:20:26.485276   29628 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
	I0906 15:20:26.485372   29628 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:20:26.485470   29628 kubeadm.go:317] W0906 22:18:31.571317    1576 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 15:20:26.485577   29628 kubeadm.go:317] W0906 22:18:31.572112    1576 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 15:20:26.485643   29628 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:20:26.485701   29628 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0906 15:20:26.485844   29628 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220906151800-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220906151800-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0906 22:18:28.401439    1576 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0906 22:18:28.401492    1576 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0906 22:18:31.571317    1576 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0906 22:18:31.572112    1576 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
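The repeated connection refusals on 10248/healthz above mean the kubelet inside the node container never came up. A sketch of the usual first checks from the host (container name taken from the log; assumes the docker-driver node container is still running):

	NODE=test-preload-20220906151800-22187
	docker exec "$NODE" systemctl status kubelet --no-pager
	docker exec "$NODE" journalctl -u kubelet --no-pager -n 50
	# A docker/kubelet cgroup-driver mismatch is a common culprit;
	# both should report the same driver (systemd, per the config above).
	docker exec "$NODE" docker info --format '{{.CgroupDriver}}'
	docker exec "$NODE" grep cgroupDriver /var/lib/kubelet/config.yaml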
	
	I0906 15:20:26.485871   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:20:26.905255   29628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:20:26.914818   29628 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:20:26.914869   29628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:20:26.922256   29628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:20:26.922280   29628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:20:26.966745   29628 kubeadm.go:317] [init] Using Kubernetes version: v1.17.0
	I0906 15:20:26.966947   29628 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:20:27.266112   29628 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:20:27.266195   29628 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:20:27.266272   29628 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 15:20:27.537096   29628 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:20:27.537806   29628 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:20:27.537845   29628 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 15:20:27.614685   29628 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:20:27.636373   29628 out.go:204]   - Generating certificates and keys ...
	I0906 15:20:27.636445   29628 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:20:27.636516   29628 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:20:27.636604   29628 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:20:27.636666   29628 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:20:27.636734   29628 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:20:27.636785   29628 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:20:27.636883   29628 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:20:27.636939   29628 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:20:27.637012   29628 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:20:27.637107   29628 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:20:27.637162   29628 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:20:27.637263   29628 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:20:27.768157   29628 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:20:27.964635   29628 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:20:28.132672   29628 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:20:28.283425   29628 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:20:28.283996   29628 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:20:28.305495   29628 out.go:204]   - Booting up control plane ...
	I0906 15:20:28.305605   29628 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:20:28.305697   29628 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:20:28.305780   29628 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:20:28.305877   29628 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:20:28.306055   29628 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:21:08.266557   29628 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:21:08.267589   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:21:08.267808   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:21:13.264474   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:21:13.264654   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:21:23.258507   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:21:23.258696   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:21:43.244520   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:21:43.244666   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:22:23.218115   29628 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:22:23.218336   29628 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:22:23.218350   29628 kubeadm.go:317] 
	I0906 15:22:23.218385   29628 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:22:23.218440   29628 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:22:23.218449   29628 kubeadm.go:317] 
	I0906 15:22:23.218498   29628 kubeadm.go:317] This error is likely caused by:
	I0906 15:22:23.218554   29628 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:22:23.218691   29628 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:22:23.218704   29628 kubeadm.go:317] 
	I0906 15:22:23.218826   29628 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:22:23.218871   29628 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:22:23.218904   29628 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:22:23.218909   29628 kubeadm.go:317] 
	I0906 15:22:23.219036   29628 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:22:23.219137   29628 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:22:23.219232   29628 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:22:23.219290   29628 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:22:23.219354   29628 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:22:23.219382   29628 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:22:23.221193   29628 kubeadm.go:317] W0906 22:20:26.980304    3842 validation.go:28] Cannot validate kube-proxy config - no validator is available
	I0906 15:22:23.221273   29628 kubeadm.go:317] W0906 22:20:26.980480    3842 validation.go:28] Cannot validate kubelet config - no validator is available
	I0906 15:22:23.221332   29628 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:22:23.221441   29628 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
	I0906 15:22:23.221535   29628 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:22:23.221643   29628 kubeadm.go:317] W0906 22:20:28.303746    3842 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 15:22:23.221746   29628 kubeadm.go:317] W0906 22:20:28.304784    3842 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 15:22:23.221804   29628 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:22:23.221855   29628 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0906 15:22:23.221892   29628 kubeadm.go:398] StartCluster complete in 3m54.927942265s
	I0906 15:22:23.221963   29628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:22:23.250830   29628 logs.go:274] 0 containers: []
	W0906 15:22:23.250842   29628 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:22:23.250896   29628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:22:23.280432   29628 logs.go:274] 0 containers: []
	W0906 15:22:23.280444   29628 logs.go:276] No container was found matching "etcd"
	I0906 15:22:23.280500   29628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:22:23.309816   29628 logs.go:274] 0 containers: []
	W0906 15:22:23.309828   29628 logs.go:276] No container was found matching "coredns"
	I0906 15:22:23.309883   29628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:22:23.338436   29628 logs.go:274] 0 containers: []
	W0906 15:22:23.338447   29628 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:22:23.338503   29628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:22:23.367747   29628 logs.go:274] 0 containers: []
	W0906 15:22:23.367758   29628 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:22:23.367812   29628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:22:23.396426   29628 logs.go:274] 0 containers: []
	W0906 15:22:23.396440   29628 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:22:23.396511   29628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:22:23.425562   29628 logs.go:274] 0 containers: []
	W0906 15:22:23.425574   29628 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:22:23.425631   29628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:22:23.455028   29628 logs.go:274] 0 containers: []
	W0906 15:22:23.455040   29628 logs.go:276] No container was found matching "kube-controller-manager"
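	Each scan above filters `docker ps -a` by the `k8s_<component>` name prefix the kubelet gives its containers; eight empty results in a row mean the kubelet never created a single control-plane container. A minimal sketch of the same scan as one loop, run inside the minikube node and using only the flags visible in the log lines above:
	
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kubernetes-dashboard storage-provisioner kube-controller-manager; do
	  # identical name filter and format to the commands logged above
	  docker ps -a --filter=name=k8s_${c} --format='{{.ID}}'
	done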
	I0906 15:22:23.455048   29628 logs.go:123] Gathering logs for container status ...
	I0906 15:22:23.455055   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:22:25.512913   29628 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057837108s)
	I0906 15:22:25.513050   29628 logs.go:123] Gathering logs for kubelet ...
	I0906 15:22:25.513059   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:22:25.551322   29628 logs.go:123] Gathering logs for dmesg ...
	I0906 15:22:25.551336   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:22:25.564661   29628 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:22:25.564674   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:22:25.616166   29628 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
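	The refused connection to localhost:8443 is consistent with the empty container scans: 8443 is the apiserver's secure port inside the node, and nothing listens there because the apiserver container was never created. A quick probe from inside the node, sketched under that assumption:
	
	# expect "connection refused" while the control plane is down
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"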
	I0906 15:22:25.616177   29628 logs.go:123] Gathering logs for Docker ...
	I0906 15:22:25.616183   29628 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0906 15:22:25.631129   29628 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0906 22:20:26.980304    3842 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0906 22:20:26.980480    3842 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0906 22:20:28.303746    3842 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0906 22:20:28.304784    3842 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 15:22:25.631147   29628 out.go:239] * 
	W0906 15:22:25.631269   29628 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0906 22:20:26.980304    3842 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0906 22:20:26.980480    3842 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0906 22:20:28.303746    3842 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0906 22:20:28.304784    3842 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:22:25.631282   29628 out.go:239] * 
	W0906 15:22:25.631823   29628 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:22:25.696456   29628 out.go:177] 
	W0906 15:22:25.738969   29628 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0906 22:20:26.980304    3842 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0906 22:20:26.980480    3842 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0906 22:20:28.303746    3842 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0906 22:20:28.304784    3842 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:22:25.739076   29628 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 15:22:25.739137   29628 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 15:22:25.760466   29628 out.go:177] 

** /stderr **
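The exit reason K8S_KUBELET_NOT_RUNNING together with minikube's own Suggestion line points at a kubelet cgroup-driver mismatch. A hedged sketch of the usual verification and of the retry the log itself proposes; the profile name is taken from this test and the --extra-config flag is quoted from the Suggestion line above:

# what the kubelet actually logged, from inside the node
minikube ssh -p test-preload-20220906151800-22187 -- sudo journalctl -xeu kubelet | tail -n 50
# Docker's cgroup driver (cgroupfs vs. systemd) for comparison
minikube ssh -p test-preload-20220906151800-22187 -- docker info --format '{{.CgroupDriver}}'
# retry with the kubelet pinned to systemd, as the Suggestion line advises
out/minikube-darwin-amd64 start -p test-preload-20220906151800-22187 --driver=docker --kubernetes-version=v1.17.0 --extra-config=kubelet.cgroup-driver=systemd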
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220906151800-22187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
panic.go:522: *** TestPreload FAILED at 2022-09-06 15:22:25.867606 -0700 PDT m=+2311.194047494
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220906151800-22187
helpers_test.go:235: (dbg) docker inspect test-preload-20220906151800-22187:

-- stdout --
	[
	    {
	        "Id": "09607d2d3352610186ebd86fdfa36b0691f340f2f4d6f56c14d2015a2153ed78",
	        "Created": "2022-09-06T22:18:03.08814787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 111045,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:18:03.388834803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/09607d2d3352610186ebd86fdfa36b0691f340f2f4d6f56c14d2015a2153ed78/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09607d2d3352610186ebd86fdfa36b0691f340f2f4d6f56c14d2015a2153ed78/hostname",
	        "HostsPath": "/var/lib/docker/containers/09607d2d3352610186ebd86fdfa36b0691f340f2f4d6f56c14d2015a2153ed78/hosts",
	        "LogPath": "/var/lib/docker/containers/09607d2d3352610186ebd86fdfa36b0691f340f2f4d6f56c14d2015a2153ed78/09607d2d3352610186ebd86fdfa36b0691f340f2f4d6f56c14d2015a2153ed78-json.log",
	        "Name": "/test-preload-20220906151800-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220906151800-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220906151800-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/85e69a87cd870756c5d11f8302a13b5a3c8f2ab8da122b2fa8df867104d0b2fa-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85e69a87cd870756c5d11f8302a13b5a3c8f2ab8da122b2fa8df867104d0b2fa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85e69a87cd870756c5d11f8302a13b5a3c8f2ab8da122b2fa8df867104d0b2fa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85e69a87cd870756c5d11f8302a13b5a3c8f2ab8da122b2fa8df867104d0b2fa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220906151800-22187",
	                "Source": "/var/lib/docker/volumes/test-preload-20220906151800-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220906151800-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220906151800-22187",
	                "name.minikube.sigs.k8s.io": "test-preload-20220906151800-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eaabd03384fd45bd2b05e93b28ef3e5e74e2af1c7352d633cd54ca145b625c75",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57411"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57412"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57409"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57410"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/eaabd03384fd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220906151800-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "09607d2d3352",
	                        "test-preload-20220906151800-22187"
	                    ],
	                    "NetworkID": "6a4ded780fc2e35812c6dd8252521703bf5622866495d7aeecfbba32014999a9",
	                    "EndpointID": "21b5ccc238a5ddf420866bb351be85f2ac31136c9e7eb553fcc998843044caf1",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
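The inspect dump shows the kic container itself is healthy: state "running", the entrypoint handing off to /sbin/init, and the apiserver's 8443/tcp published to the host on port 57410, so the failure is confined to the kubelet inside the node. The handful of fields the post-mortem actually cares about can be pulled directly with docker inspect's Go-template flag; a sketch against the same container:

docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' test-preload-20220906151800-22187
docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' test-preload-20220906151800-22187  # 57410 in the dump above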
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220906151800-22187 -n test-preload-20220906151800-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220906151800-22187 -n test-preload-20220906151800-22187: exit status 6 (412.456355ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0906 15:22:26.335883   30026 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220906151800-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

** /stderr **
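The status error above is a stale-kubeconfig symptom rather than a host failure: the profile's endpoint is missing from this run's kubeconfig, which is exactly what the WARNING in stdout says. Its own remedy, with a standard kubectl check appended as a sketch:

out/minikube-darwin-amd64 update-context -p test-preload-20220906151800-22187
kubectl config get-contexts   # the profile's context and endpoint should reappear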
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220906151800-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-20220906151800-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220906151800-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220906151800-22187: (2.484477125s)
--- FAIL: TestPreload (267.98s)

TestRunningBinaryUpgrade (48.4s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2060341136.exe start -p running-upgrade-20220906152727-22187 --memory=2200 --vm-driver=docker 
E0906 15:27:41.263171   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:27:47.079673   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2060341136.exe start -p running-upgrade-20220906152727-22187 --memory=2200 --vm-driver=docker : exit status 70 (34.605047315s)

-- stdout --
	* [running-upgrade-20220906152727-22187] minikube v1.9.0 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig2150292764
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:27:43.932287348 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220906152727-22187" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:28:00.249285888 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220906152727-22187", then "minikube start -p running-upgrade-20220906152727-22187 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 6.12 MiB ... 542.91 MiB (download progress meter; carriage-return updates flattened by the log capture)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:28:00.249285888 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
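Note: the inline comments in the diff above describe the standard systemd override pattern: an empty ExecStart= first clears the command inherited from the base unit, since systemd allows multiple ExecStart= settings only for Type=oneshot services. A minimal sketch of that pattern as a drop-in (hypothetical path and flags; the provisioner here instead rewrites /lib/systemd/system/docker.service in place):

	# /etc/systemd/system/docker.service.d/10-override.conf  (hypothetical drop-in)
	[Service]
	# Clear the ExecStart inherited from the base unit, then set the replacement.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Applying it takes `sudo systemctl daemon-reload && sudo systemctl restart docker`, and that restart is exactly the step that fails in this log ("Job for docker.service failed ...").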
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2060341136.exe start -p running-upgrade-20220906152727-22187 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2060341136.exe start -p running-upgrade-20220906152727-22187 --memory=2200 --vm-driver=docker : exit status 70 (4.456584126s)

-- stdout --
	* [running-upgrade-20220906152727-22187] minikube v1.9.0 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3336642954
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220906152727-22187" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2060341136.exe start -p running-upgrade-20220906152727-22187 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2060341136.exe start -p running-upgrade-20220906152727-22187 --memory=2200 --vm-driver=docker : exit status 70 (4.312665243s)

-- stdout --
	* [running-upgrade-20220906152727-22187] minikube v1.9.0 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3009955713
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220906152727-22187" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2022-09-06 15:28:12.753211 -0700 PDT m=+2658.070193394
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220906152727-22187
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220906152727-22187:

-- stdout --
	[
	    {
	        "Id": "768c0b4290f218f3ab930a83880390795996cd968e1844380fc9af9973464bcf",
	        "Created": "2022-09-06T22:27:52.097439767Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137827,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:27:52.306752148Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/768c0b4290f218f3ab930a83880390795996cd968e1844380fc9af9973464bcf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/768c0b4290f218f3ab930a83880390795996cd968e1844380fc9af9973464bcf/hostname",
	        "HostsPath": "/var/lib/docker/containers/768c0b4290f218f3ab930a83880390795996cd968e1844380fc9af9973464bcf/hosts",
	        "LogPath": "/var/lib/docker/containers/768c0b4290f218f3ab930a83880390795996cd968e1844380fc9af9973464bcf/768c0b4290f218f3ab930a83880390795996cd968e1844380fc9af9973464bcf-json.log",
	        "Name": "/running-upgrade-20220906152727-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220906152727-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1aa5c53fcca90a1309e4a2a71a5436bb36873dac4d257121135f34918606306b-init/diff:/var/lib/docker/overlay2/4bcc8a2ebeec26fe77f5d52b8018a59d3e0a92757805287878e19b9524121dee/diff:/var/lib/docker/overlay2/36fc4a9399fbe3e3cee20c3c0bce2585043206983f214d5b89aa3269114bcbb2/diff:/var/lib/docker/overlay2/1e6255bdc9f01561a9772c464c1856682eab454eb4d93e0d98ef6338cfaaa3a3/diff:/var/lib/docker/overlay2/8205e05e02c7f1a01bb3162924c7c6851005b531f0ffa211af7ef2e636460df0/diff:/var/lib/docker/overlay2/51f1f9eb703b74b9d9197352989b984a3ed815f7c5960a2ecc84b3daad7daaca/diff:/var/lib/docker/overlay2/5ae9570f6dc344cdc352bfff39d75a4ae859199a98f372cdaa0502abf2e91e57/diff:/var/lib/docker/overlay2/fb92a82c3845b0e174133c26b284b3b3d3f6d016a68c6e4a8ca1017c777139ea/diff:/var/lib/docker/overlay2/28933cc22ec5056aec1614407fa7ccd844df051593e89525d4ff2e26944a5124/diff:/var/lib/docker/overlay2/d6c19ba19b6849bcd8b4bdaa37afa35f943eb4c4d1a2eb005c12e85b6f7de1ab/diff:/var/lib/docker/overlay2/d4b37e
003d7fa7d2e8a725c8e2e08e3701a91ffb7820f794af53c6012bee469f/diff:/var/lib/docker/overlay2/d67996e354c6052b529c519812512a911b818ee71efdfec8a38c2b7e2361b81c/diff:/var/lib/docker/overlay2/2bb4569be621a7154609a53d42d51043608a28b03cc74ba20bf14888bd2e26c4/diff:/var/lib/docker/overlay2/edb207de691d238ce84a95da5349cbdbb80b61a9575d95ade1d1416ceca92132/diff:/var/lib/docker/overlay2/50ba72a2463f1bd6435b48ef39abab87510d651e3519178b3428deddfe57eb7d/diff:/var/lib/docker/overlay2/be91accf7c79ac6368ad99ffba50aa047ede964894d31ac264317b1ae48c8a76/diff:/var/lib/docker/overlay2/5e1130fd476ebe81932fca2a567dc93e3aeaccf909f5a67741befb221e2ac990/diff:/var/lib/docker/overlay2/e605dbaa58f001b49c3fc79fdb124f069f666ad53cb61d92bfde06324430abe0/diff:/var/lib/docker/overlay2/b9177eb3db6cf8bcb4e76369bdb53732e41adcaaf31eb4c57f5acff05d9270fd/diff:/var/lib/docker/overlay2/38368a8a478a6c2c3ef0c46e0b5bd86883498da1beaf63d4d5e442f5d8bc067b/diff:/var/lib/docker/overlay2/a930e60ec726200d88722ae495a6b290141625d4dce1d61693fab2f6bcab042f/diff:/var/lib/d
ocker/overlay2/27aa1a22b914fcffdb24b1ad2ede608837762e9b22d4c8067256c39993583f6d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1aa5c53fcca90a1309e4a2a71a5436bb36873dac4d257121135f34918606306b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1aa5c53fcca90a1309e4a2a71a5436bb36873dac4d257121135f34918606306b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1aa5c53fcca90a1309e4a2a71a5436bb36873dac4d257121135f34918606306b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220906152727-22187",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220906152727-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220906152727-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220906152727-22187",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220906152727-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8ad00258ee5cf735ae2d18891f41a7fac39860d225263c81d4e8700560d2ba1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57864"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57865"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57866"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f8ad00258ee5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "06f8cd35c93a3b8103baae883ddd369e0140f392e03b530b38e650b4cf71a4d8",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "c81be874dd90e6337c4ce784b847206141dac04ab69ca89f7bdf5bce8ad92d19",
	                    "EndpointID": "06f8cd35c93a3b8103baae883ddd369e0140f392e03b530b38e650b4cf71a4d8",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220906152727-22187 -n running-upgrade-20220906152727-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220906152727-22187 -n running-upgrade-20220906152727-22187: exit status 6 (402.477354ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0906 15:28:13.212128   32019 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220906152727-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220906152727-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220906152727-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220906152727-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220906152727-22187: (2.418388345s)
--- FAIL: TestRunningBinaryUpgrade (48.40s)
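Note: the test never reached the binary upgrade; all three attempts to bring the cluster up with the legacy v1.9.0 binary failed while restarting dockerd inside the kic container. A rough manual reproduction of the legacy leg, assuming the standard minikube release URL layout (the test itself drives a pre-downloaded temp copy of the binary):

	# Fetch the v1.9.0 binary the test drives (URL follows the usual release layout).
	curl -LO https://storage.googleapis.com/minikube/releases/v1.9.0/minikube-darwin-amd64
	chmod +x minikube-darwin-amd64
	# Same invocation as version_upgrade_test.go:127, with a throwaway profile name.
	./minikube-darwin-amd64 start -p running-upgrade-manual --memory=2200 --vm-driver=docker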

TestKubernetesUpgrade (315.38s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m16.220979926s)
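Note: in the stdout below, "Generating certificates and keys" and "Booting up control plane" each appear twice, i.e. the control-plane bootstrap failed once and was retried before minikube gave up after 4m16s with exit status 109. When the profile still exists (here it is deleted during cleanup), the cluster-side detail can be pulled with:

	out/minikube-darwin-amd64 logs -p kubernetes-upgrade-20220906152610-22187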

-- stdout --
	* [kubernetes-upgrade-20220906152610-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-20220906152610-22187 in cluster kubernetes-upgrade-20220906152610-22187
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0906 15:26:10.488568   31107 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:26:10.488703   31107 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:26:10.488708   31107 out.go:309] Setting ErrFile to fd 2...
	I0906 15:26:10.488712   31107 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:26:10.488810   31107 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:26:10.489348   31107 out.go:303] Setting JSON to false
	I0906 15:26:10.504185   31107 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8741,"bootTime":1662494429,"procs":334,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:26:10.504318   31107 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:26:10.526231   31107 out.go:177] * [kubernetes-upgrade-20220906152610-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:26:10.569306   31107 notify.go:193] Checking for updates...
	I0906 15:26:10.569451   31107 preload.go:306] deleting older generation preload /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
	I0906 15:26:10.590786   31107 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:26:10.611631   31107 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:26:10.633016   31107 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:26:10.655187   31107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:26:10.676989   31107 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:26:10.699698   31107 config.go:180] Loaded profile config "missing-upgrade-20220906152523-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0906 15:26:10.699786   31107 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:26:10.768552   31107 docker.go:137] docker version: linux-20.10.17
	I0906 15:26:10.768684   31107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:26:10.897182   31107 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:49 SystemTime:2022-09-06 22:26:10.826936349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:26:10.918239   31107 out.go:177] * Using the docker driver based on user configuration
	I0906 15:26:10.939070   31107 start.go:284] selected driver: docker
	I0906 15:26:10.939093   31107 start.go:808] validating driver "docker" against <nil>
	I0906 15:26:10.939117   31107 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:26:10.942466   31107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:26:11.071985   31107 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:49 SystemTime:2022-09-06 22:26:11.002635253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:26:11.072099   31107 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0906 15:26:11.072248   31107 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 15:26:11.094266   31107 out.go:177] * Using Docker Desktop driver with root privileges
	I0906 15:26:11.115935   31107 cni.go:95] Creating CNI manager for ""
	I0906 15:26:11.115969   31107 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:26:11.115980   31107 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220906152610-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220906152610-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:26:11.137740   31107 out.go:177] * Starting control plane node kubernetes-upgrade-20220906152610-22187 in cluster kubernetes-upgrade-20220906152610-22187
	I0906 15:26:11.179904   31107 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:26:11.201642   31107 out.go:177] * Pulling base image ...
	I0906 15:26:11.244665   31107 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:26:11.244684   31107 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:26:11.307328   31107 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:26:11.307350   31107 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:26:11.318943   31107 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0906 15:26:11.318956   31107 cache.go:57] Caching tarball of preloaded images
	I0906 15:26:11.319177   31107 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:26:11.363210   31107 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0906 15:26:11.385162   31107 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0906 15:26:11.483635   31107 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0906 15:26:16.464754   31107 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0906 15:26:16.464899   31107 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0906 15:26:17.015236   31107 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 15:26:17.015310   31107 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/config.json ...
	I0906 15:26:17.015334   31107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/config.json: {Name:mk207ccf381c82538aa7a110aa6e17ecb5971793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:26:17.015586   31107 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:26:17.015618   31107 start.go:364] acquiring machines lock for kubernetes-upgrade-20220906152610-22187: {Name:mk2107b662aa1b89e1637a76efd4b8fae6fd1bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:26:17.015711   31107 start.go:368] acquired machines lock for "kubernetes-upgrade-20220906152610-22187" in 85.752µs
	I0906 15:26:17.015735   31107 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220906152610-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220906152610-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:26:17.015779   31107 start.go:125] createHost starting for "" (driver="docker")
	I0906 15:26:17.065170   31107 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0906 15:26:17.065488   31107 start.go:159] libmachine.API.Create for "kubernetes-upgrade-20220906152610-22187" (driver="docker")
	I0906 15:26:17.065531   31107 client.go:168] LocalClient.Create starting
	I0906 15:26:17.065657   31107 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem
	I0906 15:26:17.065719   31107 main.go:134] libmachine: Decoding PEM data...
	I0906 15:26:17.065744   31107 main.go:134] libmachine: Parsing certificate...
	I0906 15:26:17.065846   31107 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem
	I0906 15:26:17.065893   31107 main.go:134] libmachine: Decoding PEM data...
	I0906 15:26:17.065908   31107 main.go:134] libmachine: Parsing certificate...
	I0906 15:26:17.066518   31107 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220906152610-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 15:26:17.129574   31107 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220906152610-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 15:26:17.129684   31107 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220906152610-22187] to gather additional debugging logs...
	I0906 15:26:17.129706   31107 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220906152610-22187
	W0906 15:26:17.191177   31107 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220906152610-22187 returned with exit code 1
	I0906 15:26:17.191209   31107 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220906152610-22187]: docker network inspect kubernetes-upgrade-20220906152610-22187: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220906152610-22187
	I0906 15:26:17.191229   31107 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220906152610-22187]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220906152610-22187
	
	** /stderr **
	I0906 15:26:17.191311   31107 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 15:26:17.253957   31107 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003be218] misses:0}
	I0906 15:26:17.254010   31107 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:26:17.254029   31107 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220906152610-22187 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0906 15:26:17.254118   31107 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220906152610-22187 kubernetes-upgrade-20220906152610-22187
	W0906 15:26:17.315260   31107 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220906152610-22187 kubernetes-upgrade-20220906152610-22187 returned with exit code 1
	W0906 15:26:17.315303   31107 network_create.go:107] failed to create docker network kubernetes-upgrade-20220906152610-22187 192.168.49.0/24, will retry: subnet is taken
	I0906 15:26:17.315609   31107 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003be218] amended:false}} dirty:map[] misses:0}
	I0906 15:26:17.315627   31107 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:26:17.315822   31107 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003be218] amended:true}} dirty:map[192.168.49.0:0xc0003be218 192.168.58.0:0xc0000150d0] misses:0}
	I0906 15:26:17.315835   31107 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:26:17.315846   31107 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220906152610-22187 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0906 15:26:17.315912   31107 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220906152610-22187 kubernetes-upgrade-20220906152610-22187
	W0906 15:26:17.377118   31107 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220906152610-22187 kubernetes-upgrade-20220906152610-22187 returned with exit code 1
	W0906 15:26:17.377165   31107 network_create.go:107] failed to create docker network kubernetes-upgrade-20220906152610-22187 192.168.58.0/24, will retry: subnet is taken
	I0906 15:26:17.377430   31107 network.go:281] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003be218] amended:true}} dirty:map[192.168.49.0:0xc0003be218 192.168.58.0:0xc0000150d0] misses:1}
	I0906 15:26:17.377454   31107 network.go:239] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:26:17.377652   31107 network.go:290] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003be218] amended:true}} dirty:map[192.168.49.0:0xc0003be218 192.168.58.0:0xc0000150d0 192.168.67.0:0xc0003be260] misses:1}
	I0906 15:26:17.377663   31107 network.go:236] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:26:17.377672   31107 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220906152610-22187 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0906 15:26:17.377735   31107 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220906152610-22187 kubernetes-upgrade-20220906152610-22187
	I0906 15:26:17.471280   31107 network_create.go:99] docker network kubernetes-upgrade-20220906152610-22187 192.168.67.0/24 created
	I0906 15:26:17.471316   31107 kic.go:106] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-20220906152610-22187" container
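The retry loop above reserves candidate /24 subnets (192.168.49.0, .58.0, .67.0, stepping the third octet by 9) and moves to the next one whenever "docker network create" reports the subnet is taken. A minimal sketch of that pattern, with a hypothetical createNetwork wrapper standing in for minikube's network_create.go:

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork shells out to docker; a nonzero exit (e.g. subnet overlap)
// surfaces as a non-nil error.
func createNetwork(name, subnet, gateway string) error {
	return exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).Run()
}

func main() {
	name := "example-network" // hypothetical
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		if err := createNetwork(name, subnet, gateway); err != nil {
			fmt.Printf("subnet %s taken, retrying\n", subnet)
			continue // "will retry: subnet is taken", as in the log
		}
		fmt.Println("created", name, "on", subnet)
		return
	}
	fmt.Println("no free private subnet found")
}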
	I0906 15:26:17.471414   31107 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 15:26:17.534278   31107 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220906152610-22187 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220906152610-22187 --label created_by.minikube.sigs.k8s.io=true
	I0906 15:26:17.596101   31107 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220906152610-22187
	I0906 15:26:17.596246   31107 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220906152610-22187-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220906152610-22187 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220906152610-22187:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -d /var/lib
	I0906 15:26:18.056993   31107 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220906152610-22187
	I0906 15:26:18.057035   31107 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:26:18.057047   31107 kic.go:179] Starting extracting preloaded images to volume ...
	I0906 15:26:18.057148   31107 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220906152610-22187:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -I lz4 -xf /preloaded.tar -C /extractDir
	I0906 15:26:23.083546   31107 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220906152610-22187:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -I lz4 -xf /preloaded.tar -C /extractDir: (5.026291143s)
	I0906 15:26:23.083569   31107 kic.go:188] duration metric: took 5.026501 seconds to extract preloaded images to volume
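The extraction step mounts the preload tarball read-only into a throwaway container and untars it into the named volume, so the images land in /var inside the future node. A simplified stand-in for the kic.go call above; the tarball path here is abbreviated and hypothetical:

package main

import (
	"log"
	"os/exec"
)

func main() {
	tarball := "/path/to/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4" // hypothetical path
	volume := "kubernetes-upgrade-20220906152610-22187"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482"
	// docker run --rm with tar as the entrypoint, decompressing with lz4.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}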
	I0906 15:26:23.083738   31107 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 15:26:23.219388   31107 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220906152610-22187 --name kubernetes-upgrade-20220906152610-22187 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220906152610-22187 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220906152610-22187 --network kubernetes-upgrade-20220906152610-22187 --ip 192.168.67.2 --volume kubernetes-upgrade-20220906152610-22187:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d
	I0906 15:26:23.612316   31107 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220906152610-22187 --format={{.State.Running}}
	I0906 15:26:23.681447   31107 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220906152610-22187 --format={{.State.Status}}
	I0906 15:26:23.754844   31107 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220906152610-22187 stat /var/lib/dpkg/alternatives/iptables
	I0906 15:26:23.887827   31107 oci.go:144] the created container "kubernetes-upgrade-20220906152610-22187" has a running status.
	I0906 15:26:23.887866   31107 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/kubernetes-upgrade-20220906152610-22187/id_rsa...
	I0906 15:26:24.043151   31107 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/kubernetes-upgrade-20220906152610-22187/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 15:26:24.203081   31107 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220906152610-22187 --format={{.State.Status}}
	I0906 15:26:24.268198   31107 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 15:26:24.268216   31107 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220906152610-22187 chown docker:docker /home/docker/.ssh/authorized_keys]
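The "Creating ssh key for kic" step is ordinary RSA keypair generation plus an authorized_keys entry that is copied into the container and chowned to the docker user, as logged above. A self-contained sketch under those assumptions (output filenames are hypothetical; requires golang.org/x/crypto/ssh):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Private key, PEM-encoded (the id_rsa file in the log).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		log.Fatal(err)
	}
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	// This line is what ends up in /home/docker/.ssh/authorized_keys.
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		log.Fatal(err)
	}
}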
	I0906 15:26:24.380271   31107 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220906152610-22187 --format={{.State.Status}}
	I0906 15:26:24.446604   31107 machine.go:88] provisioning docker machine ...
	I0906 15:26:24.449555   31107 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220906152610-22187"
	I0906 15:26:24.449673   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:24.512857   31107 main.go:134] libmachine: Using SSH client type: native
	I0906 15:26:24.516080   31107 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57740 <nil> <nil>}
	I0906 15:26:24.516095   31107 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220906152610-22187 && echo "kubernetes-upgrade-20220906152610-22187" | sudo tee /etc/hostname
	I0906 15:26:24.638399   31107 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220906152610-22187
	
	I0906 15:26:24.638484   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:24.702123   31107 main.go:134] libmachine: Using SSH client type: native
	I0906 15:26:24.705199   31107 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57740 <nil> <nil>}
	I0906 15:26:24.705214   31107 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220906152610-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220906152610-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220906152610-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:26:24.817754   31107 main.go:134] libmachine: SSH cmd err, output: <nil>: 
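Both this hostname fix-up and the later host.minikube.internal / control-plane.minikube.internal entries follow the same idempotent pattern: strip any stale line for the name, then append a fresh "ip<TAB>name" entry. A rough Go equivalent, writing to a hypothetical local test file rather than the real /etc/hosts:

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHost removes any existing line ending in "\tname" (like the grep -v
// in the log) and appends the fresh entry.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("hosts.test", "192.168.65.2", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}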
	I0906 15:26:24.817776   31107 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:26:24.817797   31107 ubuntu.go:177] setting up certificates
	I0906 15:26:24.817805   31107 provision.go:83] configureAuth start
	I0906 15:26:24.817877   31107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:24.881248   31107 provision.go:138] copyHostCerts
	I0906 15:26:24.884586   31107 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:26:24.884595   31107 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:26:24.884694   31107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:26:24.884871   31107 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:26:24.884889   31107 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:26:24.884954   31107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:26:24.885112   31107 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:26:24.885120   31107 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:26:24.885175   31107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:26:24.885282   31107 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220906152610-22187 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220906152610-22187]
	I0906 15:26:24.949956   31107 provision.go:172] copyRemoteCerts
	I0906 15:26:24.950017   31107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:26:24.950062   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:25.012527   31107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57740 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/kubernetes-upgrade-20220906152610-22187/id_rsa Username:docker}
	I0906 15:26:25.096200   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:26:25.112389   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0906 15:26:25.128926   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:26:25.145515   31107 provision.go:86] duration metric: configureAuth took 327.693774ms
	I0906 15:26:25.145538   31107 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:26:25.145692   31107 config.go:180] Loaded profile config "kubernetes-upgrade-20220906152610-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 15:26:25.145750   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:25.209427   31107 main.go:134] libmachine: Using SSH client type: native
	I0906 15:26:25.209679   31107 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57740 <nil> <nil>}
	I0906 15:26:25.209695   31107 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:26:25.322897   31107 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:26:25.322910   31107 ubuntu.go:71] root file system type: overlay
	I0906 15:26:25.323075   31107 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:26:25.323151   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:25.388001   31107 main.go:134] libmachine: Using SSH client type: native
	I0906 15:26:25.388354   31107 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57740 <nil> <nil>}
	I0906 15:26:25.388404   31107 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:26:25.508599   31107 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:26:25.513370   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:25.577676   31107 main.go:134] libmachine: Using SSH client type: native
	I0906 15:26:25.577823   31107 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57740 <nil> <nil>}
	I0906 15:26:25.577836   31107 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:26:26.149989   31107 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:26:25.519318317 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0906 15:26:26.150008   31107 machine.go:91] provisioned docker machine in 1.700478178s
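The provisioning step just shown only restarts Docker when the rendered unit differs from the live one: write docker.service.new, diff it against docker.service, and only on a difference swap the file and run daemon-reload/enable/restart. A sketch of that update-only-if-changed pattern (error handling trimmed; this is not minikube's actual provision.go):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	newUnit, err := os.ReadFile(unit + ".new")
	if err != nil {
		log.Fatal(err)
	}
	oldUnit, _ := os.ReadFile(unit) // may not exist yet
	if bytes.Equal(oldUnit, newUnit) {
		return // unchanged: skip the disruptive restart
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		log.Fatal(err)
	}
	// Mirror the systemctl sequence from the log.
	for _, args := range [][]string{
		{"daemon-reload"}, {"-f", "enable", "docker"}, {"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}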
	I0906 15:26:26.150014   31107 client.go:171] LocalClient.Create took 9.084445629s
	I0906 15:26:26.150031   31107 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220906152610-22187" took 9.084513103s
	I0906 15:26:26.150042   31107 start.go:300] post-start starting for "kubernetes-upgrade-20220906152610-22187" (driver="docker")
	I0906 15:26:26.150048   31107 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:26:26.150104   31107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:26:26.150150   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:26.215262   31107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57740 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/kubernetes-upgrade-20220906152610-22187/id_rsa Username:docker}
	I0906 15:26:26.297143   31107 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:26:26.300655   31107 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:26:26.300670   31107 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:26:26.300677   31107 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:26:26.300682   31107 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:26:26.300695   31107 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:26:26.300808   31107 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:26:26.300950   31107 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:26:26.301104   31107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:26:26.307625   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:26:26.324224   31107 start.go:303] post-start completed in 174.171632ms
	I0906 15:26:26.324739   31107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:26.388097   31107 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/config.json ...
	I0906 15:26:26.388507   31107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:26:26.388854   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:26.451725   31107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57740 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/kubernetes-upgrade-20220906152610-22187/id_rsa Username:docker}
	I0906 15:26:26.530376   31107 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:26:26.534563   31107 start.go:128] duration metric: createHost completed in 9.518733611s
	I0906 15:26:26.534580   31107 start.go:83] releasing machines lock for "kubernetes-upgrade-20220906152610-22187", held for 9.518827999s
	I0906 15:26:26.534658   31107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:26.598859   31107 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0906 15:26:26.598864   31107 ssh_runner.go:195] Run: systemctl --version
	I0906 15:26:26.598927   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:26.598945   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:26.667295   31107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57740 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/kubernetes-upgrade-20220906152610-22187/id_rsa Username:docker}
	I0906 15:26:26.668262   31107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57740 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/kubernetes-upgrade-20220906152610-22187/id_rsa Username:docker}
	I0906 15:26:26.904811   31107 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:26:26.914757   31107 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:26:26.914809   31107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:26:26.923980   31107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:26:26.938220   31107 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:26:27.004543   31107 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:26:27.075760   31107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:26:27.148274   31107 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:26:27.453066   31107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:26:27.488721   31107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:26:27.569650   31107 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0906 15:26:27.569732   31107 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220906152610-22187 dig +short host.docker.internal
	I0906 15:26:27.696749   31107 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:26:27.696982   31107 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:26:27.701001   31107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:26:27.710464   31107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:26:27.775064   31107 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:26:27.775155   31107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:26:27.805188   31107 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0906 15:26:27.805213   31107 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:26:27.805277   31107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:26:27.838743   31107 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0906 15:26:27.838769   31107 cache_images.go:84] Images are preloaded, skipping loading
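The preload check above compares "docker images --format {{.Repository}}:{{.Tag}}" output against the expected image set before deciding to skip extraction and loading. A sketch of that comparison, using the image list copied from the log output:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	required := []string{
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/kube-proxy:v1.16.0",
		"k8s.gcr.io/kube-controller-manager:v1.16.0",
		"k8s.gcr.io/kube-scheduler:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
		"k8s.gcr.io/coredns:1.6.2",
		"k8s.gcr.io/pause:3.1",
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, extraction needed:", img)
		}
	}
}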
	I0906 15:26:27.838857   31107 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:26:27.914930   31107 cni.go:95] Creating CNI manager for ""
	I0906 15:26:27.914943   31107 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:26:27.914956   31107 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:26:27.914968   31107 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220906152610-22187 NodeName:kubernetes-upgrade-20220906152610-22187 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:26:27.915074   31107 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220906152610-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220906152610-22187
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 15:26:27.915151   31107 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220906152610-22187 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220906152610-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
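Configs like the kubeadm YAML and kubelet unit above are rendered from the cluster parameters in the log (cluster name, node IP, Kubernetes version). An illustration with text/template, rendering only a ClusterConfiguration fragment; this stands in for, and is not, minikube's actual template code:

package main

import (
	"os"
	"text/template"
)

// frag reproduces a slice of the kubeadm config from the log, with the
// cluster-specific values pulled out as template fields.
const frag = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.KubernetesVersion}}
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
`

func main() {
	params := struct {
		ClusterName, KubernetesVersion, NodeIP string
	}{"kubernetes-upgrade-20220906152610-22187", "v1.16.0", "192.168.67.2"}
	tmpl := template.Must(template.New("kubeadm").Parse(frag))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}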
	I0906 15:26:27.915201   31107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0906 15:26:27.924647   31107 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:26:27.924707   31107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:26:27.931994   31107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0906 15:26:27.945191   31107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:26:27.957641   31107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0906 15:26:27.970309   31107 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:26:27.974084   31107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:26:27.983217   31107 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187 for IP: 192.168.67.2
	I0906 15:26:27.983321   31107 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:26:27.983370   31107 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:26:27.983410   31107 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.key
	I0906 15:26:27.983422   31107 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.crt with IP's: []
	I0906 15:26:28.116045   31107 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.crt ...
	I0906 15:26:28.116060   31107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.crt: {Name:mk03d2832af7e18aa5ca4c71235bd48808a7e231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:26:28.116321   31107 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.key ...
	I0906 15:26:28.116329   31107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.key: {Name:mk49349522772a5469f35c7b7106a84c4930065f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:26:28.116517   31107 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.key.c7fa3a9e
	I0906 15:26:28.116532   31107 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 15:26:28.222062   31107 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.crt.c7fa3a9e ...
	I0906 15:26:28.222076   31107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.crt.c7fa3a9e: {Name:mk17925f456bb0e4d3e348fa932bec185e0f361c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:26:28.222330   31107 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.key.c7fa3a9e ...
	I0906 15:26:28.222338   31107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.key.c7fa3a9e: {Name:mk2c35b4866eed025e3c3559cc3ae9f21a234d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:26:28.222525   31107 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.crt
	I0906 15:26:28.222738   31107 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.key
	I0906 15:26:28.222907   31107 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/proxy-client.key
	I0906 15:26:28.222925   31107 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/proxy-client.crt with IP's: []
	I0906 15:26:28.317319   31107 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/proxy-client.crt ...
	I0906 15:26:28.317336   31107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/proxy-client.crt: {Name:mkce4b3002d82ee503cafd91e65a9fec3b4bc209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:26:28.317675   31107 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/proxy-client.key ...
	I0906 15:26:28.317684   31107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/proxy-client.key: {Name:mk97e73c7b4d39d76f9ac0150b2f2a00bd465b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
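Each of the cert steps above is standard crypto/x509 issuance: generate a key, build a certificate template, and sign it with the CA key. A self-contained sketch of that flow (the CA is generated inline here, whereas minikube reuses .minikube/ca.key as logged; the system:masters organization is an assumption, and error handling is trimmed):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Inline CA; minikube would load an existing ca.crt/ca.key instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// CA-signed client certificate, the analogue of client.crt above.
	clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("client.crt", out, 0644); err != nil {
		log.Fatal(err)
	}
}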
	I0906 15:26:28.318144   31107 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:26:28.318192   31107 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:26:28.318208   31107 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:26:28.318283   31107 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:26:28.318318   31107 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:26:28.318355   31107 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:26:28.318437   31107 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:26:28.319096   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:26:28.338503   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 15:26:28.355930   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:26:28.372752   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 15:26:28.389204   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:26:28.406776   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:26:28.424556   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:26:28.442365   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:26:28.460179   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:26:28.478503   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:26:28.495443   31107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:26:28.515232   31107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:26:28.528243   31107 ssh_runner.go:195] Run: openssl version
	I0906 15:26:28.533421   31107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:26:28.542639   31107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:26:28.546524   31107 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:26:28.546582   31107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:26:28.552954   31107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:26:28.561120   31107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:26:28.568596   31107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:26:28.572853   31107 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:26:28.572900   31107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:26:28.578354   31107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:26:28.586024   31107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:26:28.593579   31107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:26:28.597246   31107 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:26:28.597298   31107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:26:28.602350   31107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
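Note: the sequence above follows OpenSSL's hashed CA directory convention: each PEM under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and symlinked into /etc/ssl/certs as <hash>.0 so the system trust store can resolve it. A minimal Go sketch of that convention (illustrative only, not minikube's actual certs.go implementation; the paths are taken from the log above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert replicates the hash/symlink step seen in the log: compute the
// OpenSSL subject-name hash of a CA certificate and link it into the hashed
// trust directory as <hash>.0 (the equivalent of "ln -fs").
func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // force-replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}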
	I0906 15:26:28.609742   31107 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-20220906152610-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220906152610-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:26:28.609838   31107 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:26:28.638824   31107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:26:28.646409   31107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:26:28.653830   31107 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:26:28.653875   31107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:26:28.661186   31107 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:26:28.661219   31107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:26:28.705706   31107 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:26:28.705763   31107 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:26:29.020494   31107 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:26:29.020604   31107 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:26:29.020702   31107 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:26:29.333610   31107 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:26:29.334171   31107 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:26:29.341010   31107 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:26:29.404944   31107 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:26:29.446975   31107 out.go:204]   - Generating certificates and keys ...
	I0906 15:26:29.447081   31107 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:26:29.447173   31107 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:26:29.656525   31107 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 15:26:29.952657   31107 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0906 15:26:30.033011   31107 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0906 15:26:30.177023   31107 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0906 15:26:30.243982   31107 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0906 15:26:30.244099   31107 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220906152610-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0906 15:26:30.358787   31107 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0906 15:26:30.358961   31107 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220906152610-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0906 15:26:30.557433   31107 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 15:26:30.648264   31107 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 15:26:30.761757   31107 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0906 15:26:30.761810   31107 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:26:31.216853   31107 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:26:31.378027   31107 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:26:31.640065   31107 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:26:32.088996   31107 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:26:32.089790   31107 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:26:32.111312   31107 out.go:204]   - Booting up control plane ...
	I0906 15:26:32.111413   31107 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:26:32.111499   31107 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:26:32.111582   31107 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:26:32.111648   31107 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:26:32.111766   31107 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:27:12.070231   31107 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:27:12.070919   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:27:12.071063   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:27:17.068280   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:27:17.068441   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:27:27.062093   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:27:27.062348   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:27:47.054249   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:27:47.054438   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:28:27.029960   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:28:27.030136   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:28:27.030161   31107 kubeadm.go:317] 
	I0906 15:28:27.030226   31107 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:28:27.030257   31107 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:28:27.030265   31107 kubeadm.go:317] 
	I0906 15:28:27.030298   31107 kubeadm.go:317] This error is likely caused by:
	I0906 15:28:27.030320   31107 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:28:27.030391   31107 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:28:27.030397   31107 kubeadm.go:317] 
	I0906 15:28:27.030476   31107 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:28:27.030509   31107 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:28:27.030534   31107 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:28:27.030537   31107 kubeadm.go:317] 
	I0906 15:28:27.030643   31107 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:28:27.030723   31107 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:28:27.030791   31107 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:28:27.030831   31107 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:28:27.030889   31107 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:28:27.030920   31107 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:28:27.033897   31107 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:28:27.034010   31107 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:28:27.034089   31107 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:28:27.034147   31107 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:28:27.034204   31107 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0906 15:28:27.034376   31107 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220906152610-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220906152610-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
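Note: the repeated [kubelet-check] lines above come from kubeadm polling the kubelet's local healthz endpoint (http://localhost:10248/healthz) until its budget runs out. A minimal Go sketch of an equivalent probe loop, written under that assumption (not kubeadm's actual code; the endpoint and the 4m0s budget are taken from the log). The lines that follow show minikube's fallback: "kubeadm reset --force", then a second "kubeadm init" attempt.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint until it answers 200 OK
// or the budget runs out, mirroring the checks logged above.
func waitForKubelet(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second) // kubeadm's retries are similarly spaced out
	}
	return fmt.Errorf("kubelet did not become healthy within %s", timeout)
}

func main() {
	// 4m0s matches the wait-control-plane budget quoted in the log.
	if err := waitForKubelet(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}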
	
	I0906 15:28:27.034406   31107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:28:27.463740   31107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:28:27.474032   31107 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:28:27.474089   31107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:28:27.482306   31107 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:28:27.482324   31107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:28:27.533345   31107 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:28:27.533404   31107 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:28:27.860665   31107 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:28:27.860760   31107 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:28:27.860862   31107 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:28:28.161781   31107 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:28:28.163560   31107 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:28:28.170528   31107 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:28:28.238739   31107 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:28:28.281102   31107 out.go:204]   - Generating certificates and keys ...
	I0906 15:28:28.281196   31107 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:28:28.281284   31107 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:28:28.281371   31107 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:28:28.281457   31107 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:28:28.281521   31107 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:28:28.281600   31107 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:28:28.281687   31107 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:28:28.281786   31107 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:28:28.281895   31107 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:28:28.281982   31107 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:28:28.282028   31107 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:28:28.282107   31107 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:28:28.449606   31107 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:28:28.550102   31107 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:28:28.754431   31107 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:28:28.832174   31107 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:28:28.832698   31107 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:28:28.854146   31107 out.go:204]   - Booting up control plane ...
	I0906 15:28:28.854236   31107 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:28:28.854307   31107 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:28:28.854356   31107 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:28:28.854422   31107 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:28:28.854553   31107 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:29:08.814065   31107 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:29:08.814802   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:08.815014   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:13.812888   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:13.813101   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:23.806911   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:23.807141   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:43.793507   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:43.793709   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:30:23.765663   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:30:23.765887   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:30:23.765900   31107 kubeadm.go:317] 
	I0906 15:30:23.765938   31107 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:30:23.765996   31107 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:30:23.766005   31107 kubeadm.go:317] 
	I0906 15:30:23.766040   31107 kubeadm.go:317] This error is likely caused by:
	I0906 15:30:23.766075   31107 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:30:23.766178   31107 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:30:23.766186   31107 kubeadm.go:317] 
	I0906 15:30:23.766312   31107 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:30:23.766393   31107 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:30:23.766452   31107 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:30:23.766465   31107 kubeadm.go:317] 
	I0906 15:30:23.766578   31107 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:30:23.766735   31107 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:30:23.766829   31107 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:30:23.766877   31107 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:30:23.767046   31107 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:30:23.767087   31107 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:30:23.771060   31107 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:30:23.771255   31107 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:30:23.771374   31107 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:30:23.771482   31107 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:30:23.771550   31107 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0906 15:30:23.771604   31107 kubeadm.go:398] StartCluster complete in 3m55.152363805s
	I0906 15:30:23.771677   31107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:30:23.806242   31107 logs.go:274] 0 containers: []
	W0906 15:30:23.806255   31107 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:30:23.806317   31107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:30:23.848707   31107 logs.go:274] 0 containers: []
	W0906 15:30:23.848719   31107 logs.go:276] No container was found matching "etcd"
	I0906 15:30:23.848775   31107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:30:23.892038   31107 logs.go:274] 0 containers: []
	W0906 15:30:23.892050   31107 logs.go:276] No container was found matching "coredns"
	I0906 15:30:23.892110   31107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:30:23.943992   31107 logs.go:274] 0 containers: []
	W0906 15:30:23.944011   31107 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:30:23.944086   31107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:30:23.980231   31107 logs.go:274] 0 containers: []
	W0906 15:30:23.980243   31107 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:30:23.980305   31107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:30:24.025173   31107 logs.go:274] 0 containers: []
	W0906 15:30:24.025189   31107 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:30:24.025247   31107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:30:24.063189   31107 logs.go:274] 0 containers: []
	W0906 15:30:24.063201   31107 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:30:24.063259   31107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:30:24.101348   31107 logs.go:274] 0 containers: []
	W0906 15:30:24.101360   31107 logs.go:276] No container was found matching "kube-controller-manager"
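Note: the container lookups above rely on the kubelet's Docker naming convention, k8s_<container>_<pod>_<namespace>_..., so each control-plane component can be found with a docker ps name filter. A small Go sketch of the same lookup (illustrative only; minikube's logs.go runs these commands over ssh_runner, and the component list here is abbreviated):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists containers whose names match the kubelet convention
// k8s_<component>..., exactly as the docker ps filters in the log above do.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, _ := containerIDs(c)
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}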
	I0906 15:30:24.101368   31107 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:30:24.101380   31107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:30:24.174737   31107 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:30:24.174753   31107 logs.go:123] Gathering logs for Docker ...
	I0906 15:30:24.174760   31107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:30:24.195126   31107 logs.go:123] Gathering logs for container status ...
	I0906 15:30:24.195141   31107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:30:26.268719   31107 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.073563901s)
	I0906 15:30:26.268839   31107 logs.go:123] Gathering logs for kubelet ...
	I0906 15:30:26.268849   31107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:30:26.312124   31107 logs.go:123] Gathering logs for dmesg ...
	I0906 15:30:26.312142   31107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0906 15:30:26.325956   31107 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 15:30:26.325977   31107 out.go:239] * 
	W0906 15:30:26.326099   31107 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:30:26.326113   31107 out.go:239] * 
	W0906 15:30:26.326668   31107 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:30:26.440546   31107 out.go:177] 
	W0906 15:30:26.516526   31107 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
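	(Two of the three preflight warnings above have standard one-line remedies; a sketch, assuming systemd is PID 1 inside the node. The Docker-version warning is informational and has no command-line fix:)
	    sudo swapoff -a                        # [WARNING Swap]: turns swap off until the next boot
	    sudo systemctl enable kubelet.service  # [WARNING Service-Kubelet]: start kubelet at boot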
	
	W0906 15:30:26.516738   31107 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 15:30:26.516856   31107 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 15:30:26.591194   31107 out.go:177] 
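	(A sketch of acting on the suggestion above: check which cgroup driver the Docker daemon inside the node actually reports, then retry the same profile with the kubelet driver pinned; kubelet's driver must match the daemon's:)
	    docker info --format '{{.CgroupDriver}}'   # prints cgroupfs or systemd
	    out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --extra-config=kubelet.cgroup-driver=systemd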

                                                
                                                
** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220906152610-22187
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220906152610-22187: (1.624864605s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220906152610-22187 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220906152610-22187 status --format={{.Host}}: exit status 7 (117.317558ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
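(The non-zero exit is expected for a stopped host; a sketch of branching on the templated output instead of the exit code, using the profile name from this run:)

    host=$(out/minikube-darwin-amd64 -p kubernetes-upgrade-20220906152610-22187 status --format='{{.Host}}') || true
    if [ "$host" = "Stopped" ]; then
        echo "host is stopped; a subsequent 'minikube start' will restart it"
    fi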
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --memory=2200 --kubernetes-version=v1.25.0 --alsologtostderr -v=1 --driver=docker 
E0906 15:30:37.935654   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:30:44.329668   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --memory=2200 --kubernetes-version=v1.25.0 --alsologtostderr -v=1 --driver=docker : (30.803478885s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220906152610-22187 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (463.476901ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220906152610-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220906152610-22187
	    minikube start -p kubernetes-upgrade-20220906152610-22187 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220906152610-221872 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220906152610-22187 --kubernetes-version=v1.25.0
	    

                                                
                                                
** /stderr **
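(Of the three options suggested above, only the first lands the existing profile name on v1.16.0; as a sketch, reusing the commands from the message:)

    minikube delete -p kubernetes-upgrade-20220906152610-22187
    minikube start -p kubernetes-upgrade-20220906152610-22187 --kubernetes-version=v1.16.0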
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --memory=2200 --kubernetes-version=v1.25.0 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220906152610-22187 --memory=2200 --kubernetes-version=v1.25.0 --alsologtostderr -v=1 --driver=docker : (18.995732256s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-09-06 15:31:18.751328 -0700 PDT m=+2844.067405270
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220906152610-22187
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220906152610-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e214e81fe87f81a3f6de598ed7ee4d1b906e371eba69766592b82e4278266786",
	        "Created": "2022-09-06T22:26:23.293445322Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 151597,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:30:29.758453094Z",
	            "FinishedAt": "2022-09-06T22:30:27.20566122Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/e214e81fe87f81a3f6de598ed7ee4d1b906e371eba69766592b82e4278266786/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e214e81fe87f81a3f6de598ed7ee4d1b906e371eba69766592b82e4278266786/hostname",
	        "HostsPath": "/var/lib/docker/containers/e214e81fe87f81a3f6de598ed7ee4d1b906e371eba69766592b82e4278266786/hosts",
	        "LogPath": "/var/lib/docker/containers/e214e81fe87f81a3f6de598ed7ee4d1b906e371eba69766592b82e4278266786/e214e81fe87f81a3f6de598ed7ee4d1b906e371eba69766592b82e4278266786-json.log",
	        "Name": "/kubernetes-upgrade-20220906152610-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220906152610-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220906152610-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e7f1b270f0386ee708ae744495185aa54d3c8b579cac6f2087f28fab9a13eea9-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7f1b270f0386ee708ae744495185aa54d3c8b579cac6f2087f28fab9a13eea9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7f1b270f0386ee708ae744495185aa54d3c8b579cac6f2087f28fab9a13eea9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7f1b270f0386ee708ae744495185aa54d3c8b579cac6f2087f28fab9a13eea9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220906152610-22187",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220906152610-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220906152610-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220906152610-22187",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220906152610-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5823cc26fc5ffb44e26486a196e65ff7043ab9662a94cae5424fdd35786f2f18",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58042"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58043"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58044"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58046"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5823cc26fc5f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220906152610-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e214e81fe87f",
	                        "kubernetes-upgrade-20220906152610-22187"
	                    ],
	                    "NetworkID": "37341955daebf07d36c9388249c4b42c90f18db2d2c40bfddda19445fd00a7f3",
	                    "EndpointID": "6e499432d7d3b957ae02024dd12f69b66da1329680084cda90117de0ac58ea72",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
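(When only a few of these fields matter, docker inspect accepts a Go template instead of emitting the full document; an illustrative sketch pulling the container state and the profile network's IP:)

    docker inspect kubernetes-upgrade-20220906152610-22187 --format 'status={{.State.Status}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'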
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220906152610-22187 -n kubernetes-upgrade-20220906152610-22187
E0906 15:31:18.896452   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220906152610-22187 logs -n 25

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220906152610-22187 logs -n 25: (3.069815582s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                   Args                    |                  Profile                  |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                        | insufficient-storage-20220906152509-22187 | jenkins | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|         | insufficient-storage-20220906152509-22187 |                                           |         |         |                     |                     |
	| start   | -p                                        | offline-docker-20220906152522-22187       | jenkins | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:26 PDT |
	|         | offline-docker-20220906152522-22187       |                                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                    |                                           |         |         |                     |                     |
	|         | --memory=2048 --wait=true                 |                                           |         |         |                     |                     |
	|         | --driver=docker                           |                                           |         |         |                     |                     |
	| delete  | -p                                        | flannel-20220906152522-22187              | jenkins | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|         | flannel-20220906152522-22187              |                                           |         |         |                     |                     |
	| delete  | -p                                        | custom-flannel-20220906152522-22187       | jenkins | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|         | custom-flannel-20220906152522-22187       |                                           |         |         |                     |                     |
	| delete  | -p                                        | offline-docker-20220906152522-22187       | jenkins | v1.26.1 | 06 Sep 22 15:26 PDT | 06 Sep 22 15:26 PDT |
	|         | offline-docker-20220906152522-22187       |                                           |         |         |                     |                     |
	| start   | -p                                        | kubernetes-upgrade-20220906152610-22187   | jenkins | v1.26.1 | 06 Sep 22 15:26 PDT |                     |
	|         | kubernetes-upgrade-20220906152610-22187   |                                           |         |         |                     |                     |
	|         | --memory=2200                             |                                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0              |                                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker    |                                           |         |         |                     |                     |
	| delete  | -p                                        | missing-upgrade-20220906152523-22187      | jenkins | v1.26.1 | 06 Sep 22 15:26 PDT | 06 Sep 22 15:26 PDT |
	|         | missing-upgrade-20220906152523-22187      |                                           |         |         |                     |                     |
	| delete  | -p                                        | stopped-upgrade-20220906152634-22187      | jenkins | v1.26.1 | 06 Sep 22 15:27 PDT | 06 Sep 22 15:27 PDT |
	|         | stopped-upgrade-20220906152634-22187      |                                           |         |         |                     |                     |
	| delete  | -p                                        | running-upgrade-20220906152727-22187      | jenkins | v1.26.1 | 06 Sep 22 15:28 PDT | 06 Sep 22 15:28 PDT |
	|         | running-upgrade-20220906152727-22187      |                                           |         |         |                     |                     |
	| start   | -p pause-20220906152815-22187             | pause-20220906152815-22187                | jenkins | v1.26.1 | 06 Sep 22 15:28 PDT | 06 Sep 22 15:28 PDT |
	|         | --memory=2048                             |                                           |         |         |                     |                     |
	|         | --install-addons=false                    |                                           |         |         |                     |                     |
	|         | --wait=all --driver=docker                |                                           |         |         |                     |                     |
	| start   | -p pause-20220906152815-22187             | pause-20220906152815-22187                | jenkins | v1.26.1 | 06 Sep 22 15:28 PDT | 06 Sep 22 15:30 PDT |
	|         | --alsologtostderr -v=1                    |                                           |         |         |                     |                     |
	|         | --driver=docker                           |                                           |         |         |                     |                     |
	| delete  | -p pause-20220906152815-22187             | pause-20220906152815-22187                | jenkins | v1.26.1 | 06 Sep 22 15:30 PDT | 06 Sep 22 15:30 PDT |
	| start   | -p                                        | NoKubernetes-20220906153018-22187         | jenkins | v1.26.1 | 06 Sep 22 15:30 PDT |                     |
	|         | NoKubernetes-20220906153018-22187         |                                           |         |         |                     |                     |
	|         | --no-kubernetes                           |                                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20                 |                                           |         |         |                     |                     |
	|         | --driver=docker                           |                                           |         |         |                     |                     |
	| start   | -p                                        | NoKubernetes-20220906153018-22187         | jenkins | v1.26.1 | 06 Sep 22 15:30 PDT | 06 Sep 22 15:30 PDT |
	|         | NoKubernetes-20220906153018-22187         |                                           |         |         |                     |                     |
	|         | --driver=docker                           |                                           |         |         |                     |                     |
	| stop    | -p                                        | kubernetes-upgrade-20220906152610-22187   | jenkins | v1.26.1 | 06 Sep 22 15:30 PDT | 06 Sep 22 15:30 PDT |
	|         | kubernetes-upgrade-20220906152610-22187   |                                           |         |         |                     |                     |
	| start   | -p                                        | kubernetes-upgrade-20220906152610-22187   | jenkins | v1.26.1 | 06 Sep 22 15:30 PDT | 06 Sep 22 15:30 PDT |
	|         | kubernetes-upgrade-20220906152610-22187   |                                           |         |         |                     |                     |
	|         | --memory=2200                             |                                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0              |                                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker    |                                           |         |         |                     |                     |
	| start   | -p                                        | NoKubernetes-20220906153018-22187         | jenkins | v1.26.1 | 06 Sep 22 15:30 PDT | 06 Sep 22 15:31 PDT |
	|         | NoKubernetes-20220906153018-22187         |                                           |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker           |                                           |         |         |                     |                     |
	| start   | -p                                        | kubernetes-upgrade-20220906152610-22187   | jenkins | v1.26.1 | 06 Sep 22 15:30 PDT |                     |
	|         | kubernetes-upgrade-20220906152610-22187   |                                           |         |         |                     |                     |
	|         | --memory=2200                             |                                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0              |                                           |         |         |                     |                     |
	|         | --driver=docker                           |                                           |         |         |                     |                     |
	| start   | -p                                        | kubernetes-upgrade-20220906152610-22187   | jenkins | v1.26.1 | 06 Sep 22 15:30 PDT | 06 Sep 22 15:31 PDT |
	|         | kubernetes-upgrade-20220906152610-22187   |                                           |         |         |                     |                     |
	|         | --memory=2200                             |                                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0              |                                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker    |                                           |         |         |                     |                     |
	| delete  | -p                                        | NoKubernetes-20220906153018-22187         | jenkins | v1.26.1 | 06 Sep 22 15:31 PDT | 06 Sep 22 15:31 PDT |
	|         | NoKubernetes-20220906153018-22187         |                                           |         |         |                     |                     |
	| start   | -p                                        | NoKubernetes-20220906153018-22187         | jenkins | v1.26.1 | 06 Sep 22 15:31 PDT | 06 Sep 22 15:31 PDT |
	|         | NoKubernetes-20220906153018-22187         |                                           |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker           |                                           |         |         |                     |                     |
	| ssh     | -p                                        | NoKubernetes-20220906153018-22187         | jenkins | v1.26.1 | 06 Sep 22 15:31 PDT |                     |
	|         | NoKubernetes-20220906153018-22187         |                                           |         |         |                     |                     |
	|         | sudo systemctl is-active --quiet          |                                           |         |         |                     |                     |
	|         | service kubelet                           |                                           |         |         |                     |                     |
	| profile | list                                      | minikube                                  | jenkins | v1.26.1 | 06 Sep 22 15:31 PDT | 06 Sep 22 15:31 PDT |
	| profile | list --output=json                        | minikube                                  | jenkins | v1.26.1 | 06 Sep 22 15:31 PDT | 06 Sep 22 15:31 PDT |
	| stop    | -p                                        | NoKubernetes-20220906153018-22187         | jenkins | v1.26.1 | 06 Sep 22 15:31 PDT |                     |
	|         | NoKubernetes-20220906153018-22187         |                                           |         |         |                     |                     |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:31:06
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
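	(Given the [IWEF] line format described above, warning and error lines can be filtered out of a saved log with a POSIX-safe grep; logs.txt is the file name this report suggests attaching to issues:)
	    grep -E '^[[:space:]]*[WE][0-9]{4} ' logs.txt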
	I0906 15:31:06.645463   32844 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:31:06.645654   32844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:31:06.645656   32844 out.go:309] Setting ErrFile to fd 2...
	I0906 15:31:06.645663   32844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:31:06.645768   32844 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:31:06.646248   32844 out.go:303] Setting JSON to false
	I0906 15:31:06.661113   32844 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9037,"bootTime":1662494429,"procs":332,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:31:06.661231   32844 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:31:06.683517   32844 out.go:177] * [NoKubernetes-20220906153018-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:31:06.705529   32844 notify.go:193] Checking for updates...
	I0906 15:31:06.727391   32844 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:31:06.749215   32844 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:31:06.770384   32844 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:31:06.791674   32844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:31:06.813568   32844 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:31:06.835738   32844 config.go:180] Loaded profile config "kubernetes-upgrade-20220906152610-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:31:06.835789   32844 start.go:1658] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0906 15:31:06.835822   32844 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:31:06.904917   32844 docker.go:137] docker version: linux-20.10.17
	I0906 15:31:06.905064   32844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:31:07.039512   32844 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-09-06 22:31:06.964220441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:31:07.081850   32844 out.go:177] * Using the docker driver based on user configuration
	I0906 15:31:07.102837   32844 start.go:284] selected driver: docker
	I0906 15:31:07.102846   32844 start.go:808] validating driver "docker" against <nil>
	I0906 15:31:07.102860   32844 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:31:07.102999   32844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:31:07.238392   32844 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-09-06 22:31:07.161866242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:31:07.238495   32844 start.go:1658] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0906 15:31:07.238504   32844 start.go:1658] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0906 15:31:07.238511   32844 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0906 15:31:07.240743   32844 start_flags.go:377] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0906 15:31:07.240883   32844 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 15:31:07.262585   32844 out.go:177] * Using Docker Desktop driver with root privileges
	I0906 15:31:07.283110   32844 cni.go:95] Creating CNI manager for ""
	I0906 15:31:07.283147   32844 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:31:07.283161   32844 start_flags.go:310] config:
	{Name:NoKubernetes-20220906153018-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:NoKubernetes-20220906153018-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:31:07.283227   32844 start.go:1658] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0906 15:31:07.304080   32844 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-20220906153018-22187
	I0906 15:31:07.346166   32844 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:31:07.367168   32844 out.go:177] * Pulling base image ...
	I0906 15:31:07.388112   32844 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0906 15:31:07.388136   32844 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:31:07.452900   32844 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:31:07.452917   32844 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	W0906 15:31:07.463740   32844 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0906 15:31:07.463890   32844 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/NoKubernetes-20220906153018-22187/config.json ...
	I0906 15:31:07.463942   32844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/NoKubernetes-20220906153018-22187/config.json: {Name:mk11db33699722e03ed462b9caaaf6b00b20f661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:31:07.464208   32844 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:31:07.464239   32844 start.go:364] acquiring machines lock for NoKubernetes-20220906153018-22187: {Name:mk7ddc22e15cba12eb2bf1094203351e79e4bed5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:31:07.464279   32844 start.go:368] acquired machines lock for "NoKubernetes-20220906153018-22187" in 34.2µs
	I0906 15:31:07.464294   32844 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-20220906153018-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-20220906153018-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:31:07.464342   32844 start.go:125] createHost starting for "" (driver="docker")
	I0906 15:31:04.813153   32743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 15:31:04.872306   32743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:31:04.908805   32743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:31:04.933330   32743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:31:04.952678   32743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:31:05.031343   32743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:31:05.116119   32743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:31:05.139492   32743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:31:05.222866   32743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:31:05.310440   32743 ssh_runner.go:195] Run: openssl version
	I0906 15:31:05.317577   32743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:31:05.327210   32743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:31:05.337492   32743 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:31:05.337564   32743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:31:05.345243   32743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:31:05.416672   32743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:31:05.424898   32743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:31:05.428977   32743 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:31:05.429023   32743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:31:05.434320   32743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:31:05.442572   32743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:31:05.511839   32743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:31:05.517795   32743 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:31:05.517863   32743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:31:05.529105   32743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:31:05.548325   32743 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-20220906152610-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:kubernetes-upgrade-20220906152610-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:31:05.548425   32743 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:31:05.643780   32743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:31:05.708884   32743 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:31:05.708905   32743 kubeadm.go:627] restartCluster start
	I0906 15:31:05.708962   32743 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:31:05.720636   32743 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:31:05.720716   32743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:31:05.794892   32743 kubeconfig.go:92] found "kubernetes-upgrade-20220906152610-22187" server: "https://127.0.0.1:58046"
	I0906 15:31:05.795450   32743 kapi.go:59] client config for kubernetes-upgrade-20220906152610-22187: &rest.Config{Host:"https://127.0.0.1:58046", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:31:05.795972   32743 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:31:05.813330   32743 api_server.go:165] Checking apiserver status ...
	I0906 15:31:05.813408   32743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:31:05.823794   32743 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2950/cgroup
	W0906 15:31:05.836730   32743 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2950/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:31:05.836759   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:08.608961   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:31:08.609001   32743 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:58046/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:31:08.872146   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:08.877448   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:31:08.877467   32743 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:58046/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:31:09.258838   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:09.265129   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:31:09.265150   32743 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:58046/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:31:09.688082   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:09.693817   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:31:09.693843   32743 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:58046/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:31:07.485656   32844 out.go:204] * Creating docker container (CPUs=2, Memory=5895MB) ...
	I0906 15:31:07.485914   32844 start.go:159] libmachine.API.Create for "NoKubernetes-20220906153018-22187" (driver="docker")
	I0906 15:31:07.485947   32844 client.go:168] LocalClient.Create starting
	I0906 15:31:07.486079   32844 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem
	I0906 15:31:07.486123   32844 main.go:134] libmachine: Decoding PEM data...
	I0906 15:31:07.486138   32844 main.go:134] libmachine: Parsing certificate...
	I0906 15:31:07.486228   32844 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem
	I0906 15:31:07.486253   32844 main.go:134] libmachine: Decoding PEM data...
	I0906 15:31:07.486262   32844 main.go:134] libmachine: Parsing certificate...
	I0906 15:31:07.506391   32844 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220906153018-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 15:31:07.572583   32844 cli_runner.go:211] docker network inspect NoKubernetes-20220906153018-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 15:31:07.572667   32844 network_create.go:272] running [docker network inspect NoKubernetes-20220906153018-22187] to gather additional debugging logs...
	I0906 15:31:07.572680   32844 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220906153018-22187
	W0906 15:31:07.634030   32844 cli_runner.go:211] docker network inspect NoKubernetes-20220906153018-22187 returned with exit code 1
	I0906 15:31:07.634050   32844 network_create.go:275] error running [docker network inspect NoKubernetes-20220906153018-22187]: docker network inspect NoKubernetes-20220906153018-22187: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: NoKubernetes-20220906153018-22187
	I0906 15:31:07.634063   32844 network_create.go:277] output of [docker network inspect NoKubernetes-20220906153018-22187]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: NoKubernetes-20220906153018-22187
	
	** /stderr **
	I0906 15:31:07.634142   32844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 15:31:07.699644   32844 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000528208] misses:0}
	I0906 15:31:07.699677   32844 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:31:07.699689   32844 network_create.go:115] attempt to create docker network NoKubernetes-20220906153018-22187 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0906 15:31:07.699756   32844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 NoKubernetes-20220906153018-22187
	W0906 15:31:07.767090   32844 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 NoKubernetes-20220906153018-22187 returned with exit code 1
	W0906 15:31:07.767146   32844 network_create.go:107] failed to create docker network NoKubernetes-20220906153018-22187 192.168.49.0/24, will retry: subnet is taken
	I0906 15:31:07.767412   32844 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000528208] amended:false}} dirty:map[] misses:0}
	I0906 15:31:07.767430   32844 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:31:07.767656   32844 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000528208] amended:true}} dirty:map[192.168.49.0:0xc000528208 192.168.58.0:0xc000528340] misses:0}
	I0906 15:31:07.767670   32844 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:31:07.767678   32844 network_create.go:115] attempt to create docker network NoKubernetes-20220906153018-22187 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0906 15:31:07.767747   32844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 NoKubernetes-20220906153018-22187
	W0906 15:31:07.835870   32844 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 NoKubernetes-20220906153018-22187 returned with exit code 1
	W0906 15:31:07.835901   32844 network_create.go:107] failed to create docker network NoKubernetes-20220906153018-22187 192.168.58.0/24, will retry: subnet is taken
	I0906 15:31:07.836211   32844 network.go:281] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000528208] amended:true}} dirty:map[192.168.49.0:0xc000528208 192.168.58.0:0xc000528340] misses:1}
	I0906 15:31:07.836225   32844 network.go:239] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:31:07.836441   32844 network.go:290] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000528208] amended:true}} dirty:map[192.168.49.0:0xc000528208 192.168.58.0:0xc000528340 192.168.67.0:0xc0009ca870] misses:1}
	I0906 15:31:07.836453   32844 network.go:236] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:31:07.836463   32844 network_create.go:115] attempt to create docker network NoKubernetes-20220906153018-22187 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0906 15:31:07.836535   32844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 NoKubernetes-20220906153018-22187
	W0906 15:31:07.901050   32844 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 NoKubernetes-20220906153018-22187 returned with exit code 1
	W0906 15:31:07.901085   32844 network_create.go:107] failed to create docker network NoKubernetes-20220906153018-22187 192.168.67.0/24, will retry: subnet is taken
	I0906 15:31:07.901350   32844 network.go:281] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000528208] amended:true}} dirty:map[192.168.49.0:0xc000528208 192.168.58.0:0xc000528340 192.168.67.0:0xc0009ca870] misses:2}
	I0906 15:31:07.901363   32844 network.go:239] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:31:07.901565   32844 network.go:290] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000528208] amended:true}} dirty:map[192.168.49.0:0xc000528208 192.168.58.0:0xc000528340 192.168.67.0:0xc0009ca870 192.168.76.0:0xc0005283b8] misses:2}
	I0906 15:31:07.901575   32844 network.go:236] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:31:07.901581   32844 network_create.go:115] attempt to create docker network NoKubernetes-20220906153018-22187 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0906 15:31:07.901636   32844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 NoKubernetes-20220906153018-22187
	I0906 15:31:08.003667   32844 network_create.go:99] docker network NoKubernetes-20220906153018-22187 192.168.76.0/24 created
	I0906 15:31:08.003692   32844 kic.go:106] calculated static IP "192.168.76.2" for the "NoKubernetes-20220906153018-22187" container
	I0906 15:31:08.003797   32844 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 15:31:08.073502   32844 cli_runner.go:164] Run: docker volume create NoKubernetes-20220906153018-22187 --label name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 --label created_by.minikube.sigs.k8s.io=true
	I0906 15:31:08.140736   32844 oci.go:103] Successfully created a docker volume NoKubernetes-20220906153018-22187
	I0906 15:31:08.140845   32844 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-20220906153018-22187-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 --entrypoint /usr/bin/test -v NoKubernetes-20220906153018-22187:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -d /var/lib
	I0906 15:31:08.576790   32844 oci.go:107] Successfully prepared a docker volume NoKubernetes-20220906153018-22187
	I0906 15:31:08.576824   32844 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0906 15:31:08.577386   32844 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 15:31:08.715120   32844 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-20220906153018-22187 --name NoKubernetes-20220906153018-22187 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-20220906153018-22187 --network NoKubernetes-20220906153018-22187 --ip 192.168.76.2 --volume NoKubernetes-20220906153018-22187:/var --security-opt apparmor=unconfined --memory=5895mb --memory-swap=5895mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d
	I0906 15:31:09.157800   32844 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220906153018-22187 --format={{.State.Running}}
	I0906 15:31:09.223849   32844 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220906153018-22187 --format={{.State.Status}}
	I0906 15:31:09.292907   32844 cli_runner.go:164] Run: docker exec NoKubernetes-20220906153018-22187 stat /var/lib/dpkg/alternatives/iptables
	I0906 15:31:09.406679   32844 oci.go:144] the created container "NoKubernetes-20220906153018-22187" has a running status.
	I0906 15:31:09.406703   32844 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/NoKubernetes-20220906153018-22187/id_rsa...
	I0906 15:31:09.668612   32844 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/NoKubernetes-20220906153018-22187/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 15:31:09.783169   32844 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220906153018-22187 --format={{.State.Status}}
	I0906 15:31:09.846458   32844 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 15:31:09.846478   32844 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-20220906153018-22187 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 15:31:09.959476   32844 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220906153018-22187 --format={{.State.Status}}
	I0906 15:31:10.022383   32844 machine.go:88] provisioning docker machine ...
	I0906 15:31:10.022421   32844 ubuntu.go:169] provisioning hostname "NoKubernetes-20220906153018-22187"
	I0906 15:31:10.022514   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:10.085777   32844 main.go:134] libmachine: Using SSH client type: native
	I0906 15:31:10.085978   32844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 58176 <nil> <nil>}
	I0906 15:31:10.085992   32844 main.go:134] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-20220906153018-22187 && echo "NoKubernetes-20220906153018-22187" | sudo tee /etc/hostname
	I0906 15:31:10.205220   32844 main.go:134] libmachine: SSH cmd err, output: <nil>: NoKubernetes-20220906153018-22187
	
	I0906 15:31:10.205285   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:10.276731   32844 main.go:134] libmachine: Using SSH client type: native
	I0906 15:31:10.276892   32844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 58176 <nil> <nil>}
	I0906 15:31:10.276904   32844 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-20220906153018-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-20220906153018-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-20220906153018-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:31:10.395894   32844 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:31:10.395907   32844 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:31:10.395924   32844 ubuntu.go:177] setting up certificates
	I0906 15:31:10.395930   32844 provision.go:83] configureAuth start
	I0906 15:31:10.395987   32844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220906153018-22187
	I0906 15:31:10.465257   32844 provision.go:138] copyHostCerts
	I0906 15:31:10.465360   32844 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:31:10.465367   32844 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:31:10.465462   32844 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:31:10.465651   32844 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:31:10.465659   32844 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:31:10.465720   32844 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:31:10.465877   32844 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:31:10.465880   32844 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:31:10.465942   32844 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:31:10.466053   32844 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-20220906153018-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube NoKubernetes-20220906153018-22187]
	I0906 15:31:10.545180   32844 provision.go:172] copyRemoteCerts
	I0906 15:31:10.545229   32844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:31:10.545274   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:10.615572   32844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58176 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/NoKubernetes-20220906153018-22187/id_rsa Username:docker}
	I0906 15:31:10.708011   32844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:31:10.732237   32844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:31:10.753322   32844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0906 15:31:10.774293   32844 provision.go:86] duration metric: configureAuth took 378.341225ms
	I0906 15:31:10.774303   32844 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:31:10.774431   32844 config.go:180] Loaded profile config "NoKubernetes-20220906153018-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0906 15:31:10.774481   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:10.844105   32844 main.go:134] libmachine: Using SSH client type: native
	I0906 15:31:10.844261   32844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 58176 <nil> <nil>}
	I0906 15:31:10.844288   32844 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:31:10.958286   32844 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:31:10.958293   32844 ubuntu.go:71] root file system type: overlay
	I0906 15:31:10.958461   32844 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:31:10.958537   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:11.023092   32844 main.go:134] libmachine: Using SSH client type: native
	I0906 15:31:11.023261   32844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 58176 <nil> <nil>}
	I0906 15:31:11.023312   32844 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:31:11.142864   32844 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:31:11.142945   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:11.209534   32844 main.go:134] libmachine: Using SSH client type: native
	I0906 15:31:11.209704   32844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 58176 <nil> <nil>}
	I0906 15:31:11.209713   32844 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:31:11.811456   32844 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:31:11.144286586 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0906 15:31:11.811468   32844 machine.go:91] provisioned docker machine in 1.789072688s
	I0906 15:31:11.811473   32844 client.go:171] LocalClient.Create took 4.325522713s
	I0906 15:31:11.811489   32844 start.go:167] duration metric: libmachine.API.Create for "NoKubernetes-20220906153018-22187" took 4.325575104s
	I0906 15:31:11.811498   32844 start.go:300] post-start starting for "NoKubernetes-20220906153018-22187" (driver="docker")
	I0906 15:31:11.811502   32844 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:31:11.811556   32844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:31:11.811601   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:11.876498   32844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58176 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/NoKubernetes-20220906153018-22187/id_rsa Username:docker}
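	The Go template in the inspect call above pulls the published host port out of .NetworkSettings.Ports, indexing the "22/tcp" key and taking the first binding. The same one-liner works against any container with a published SSH port (the container name below is a hypothetical placeholder):
	
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' some-container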
	I0906 15:31:11.959389   32844 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:31:11.963358   32844 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:31:11.963372   32844 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:31:11.963381   32844 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:31:11.963387   32844 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:31:11.963395   32844 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:31:11.963496   32844 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:31:11.963639   32844 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:31:11.963785   32844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:31:11.971892   32844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:31:11.992090   32844 start.go:303] post-start completed in 180.581841ms
	I0906 15:31:11.992800   32844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220906153018-22187
	I0906 15:31:12.085652   32844 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/NoKubernetes-20220906153018-22187/config.json ...
	I0906 15:31:12.086310   32844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:31:12.086397   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:12.167248   32844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58176 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/NoKubernetes-20220906153018-22187/id_rsa Username:docker}
	I0906 15:31:12.251387   32844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:31:12.258316   32844 start.go:128] duration metric: createHost completed in 4.793964897s
	I0906 15:31:12.258330   32844 start.go:83] releasing machines lock for "NoKubernetes-20220906153018-22187", held for 4.794045197s
	I0906 15:31:12.258421   32844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220906153018-22187
	I0906 15:31:12.324904   32844 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0906 15:31:12.324991   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:12.325104   32844 ssh_runner.go:195] Run: systemctl --version
	I0906 15:31:12.325561   32844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220906153018-22187
	I0906 15:31:12.397869   32844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58176 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/NoKubernetes-20220906153018-22187/id_rsa Username:docker}
	I0906 15:31:12.397873   32844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58176 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/NoKubernetes-20220906153018-22187/id_rsa Username:docker}
	I0906 15:31:12.627331   32844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:31:12.637095   32844 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:31:12.637146   32844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:31:12.646264   32844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:31:12.659287   32844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:31:12.731756   32844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:31:12.798870   32844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:31:12.875593   32844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:31:13.134201   32844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:31:13.176044   32844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:31:13.255930   32844 out.go:204] * Preparing Docker 20.10.17 ...
	I0906 15:31:13.278270   32844 out.go:177] * Done! minikube is ready without Kubernetes!
	I0906 15:31:13.321518   32844 out.go:177] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube docker-env" to point your docker-cli to the docker inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:31:10.167363   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:10.173137   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 200:
	ok
	I0906 15:31:10.186623   32743 system_pods.go:86] 5 kube-system pods found
	I0906 15:31:10.186639   32743 system_pods.go:89] "etcd-kubernetes-upgrade-20220906152610-22187" [47f63356-0ec6-43a6-ae86-c717d0b9aeec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:31:10.186651   32743 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-20220906152610-22187" [019a0774-10b6-4875-9dbb-a86098d3b701] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:31:10.186659   32743 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-20220906152610-22187" [4375b33f-8df2-4996-b5ae-5be61a0f3b10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 15:31:10.186668   32743 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-20220906152610-22187" [bfbe21d2-a78f-4e42-879d-1bf8b273500e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:31:10.186674   32743 system_pods.go:89] "storage-provisioner" [d364f749-72ec-4ed6-be3a-9ae140deb3f6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0906 15:31:10.186681   32743 kubeadm.go:611] needs reconfigure: missing components: kube-dns, kube-proxy
	I0906 15:31:10.186688   32743 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:31:10.186744   32743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:31:10.220048   32743 docker.go:443] Stopping containers: [a9c5e2002811 90ed4637296d 0ebcd4c4ff56 c68c1554f096 60ccfe1a5edd 1407c31449df c9955c428b23 324b77f8e359 59c722bf8464 1b01faf94945 9ebccee9fe98 c76db9ec9833 37df99b974f0 db035f3c1de0 b74700b5c8e7 abc113877022 eedf3fbd3f36]
	I0906 15:31:10.220125   32743 ssh_runner.go:195] Run: docker stop a9c5e2002811 90ed4637296d 0ebcd4c4ff56 c68c1554f096 60ccfe1a5edd 1407c31449df c9955c428b23 324b77f8e359 59c722bf8464 1b01faf94945 9ebccee9fe98 c76db9ec9833 37df99b974f0 db035f3c1de0 b74700b5c8e7 abc113877022 eedf3fbd3f36
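	The --filter regex above leans on dockershim's container-naming scheme, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so matching "_(kube-system)_" selects exactly the containers belonging to kube-system pods. A standalone sketch of the same query, with names added for readability:
	
	docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}} {{.Names}}'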
	I0906 15:31:10.718393   32743 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:31:10.815258   32743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:31:10.827694   32743 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5763 Sep  6 22:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5799 Sep  6 22:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5963 Sep  6 22:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5743 Sep  6 22:28 /etc/kubernetes/scheduler.conf
	
	I0906 15:31:10.827768   32743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:31:10.839792   32743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:31:10.849931   32743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:31:10.861641   32743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:31:10.870551   32743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:31:10.880970   32743 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:31:10.880994   32743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:31:10.947962   32743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:31:11.938632   32743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:31:12.122380   32743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:31:12.187597   32743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
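	The five Run lines above are the reconfigure path: instead of a full "kubeadm init", minikube replays individual init phases against the existing node. Condensed into one loop (binary path, version, and config path exactly as they appear in the log):
	
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # $phase is intentionally unquoted so "certs all" expands to two arguments
	  sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done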
	I0906 15:31:12.303209   32743 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:31:12.303278   32743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:31:12.816305   32743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:31:13.317610   32743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:31:13.327779   32743 api_server.go:71] duration metric: took 1.024577022s to wait for apiserver process to appear ...
	I0906 15:31:13.327798   32743 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:31:13.327808   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:15.907940   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:31:15.907954   32743 api_server.go:102] status: https://127.0.0.1:58046/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:31:16.408075   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:16.413244   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:31:16.413260   32743 api_server.go:102] status: https://127.0.0.1:58046/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:31:16.908233   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:16.913528   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:31:16.913542   32743 api_server.go:102] status: https://127.0.0.1:58046/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:31:17.408061   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:17.414263   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 200:
	ok
	I0906 15:31:17.420223   32743 api_server.go:140] control plane version: v1.25.0
	I0906 15:31:17.420233   32743 api_server.go:130] duration metric: took 4.092428878s to wait for apiserver health ...
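	The healthz progression above is normal apiserver startup: anonymous requests get 403 until the RBAC bootstrap roles (which grant unauthenticated access to /healthz) are reconciled, then 500 with "[-]" entries while poststarthooks finish, then 200. A sketch of the same probe by hand, using the host port mapped in this run (-k because the apiserver's certificate is not in the host trust store):
	
	until [ "$(curl -sk https://127.0.0.1:58046/healthz)" = "ok" ]; do sleep 0.5; done
	# ?verbose lists each check, matching the [+]/[-] lines captured in the log:
	curl -sk "https://127.0.0.1:58046/healthz?verbose"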
	I0906 15:31:17.420240   32743 cni.go:95] Creating CNI manager for ""
	I0906 15:31:17.420245   32743 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:31:17.420254   32743 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:31:17.425166   32743 system_pods.go:59] 5 kube-system pods found
	I0906 15:31:17.425178   32743 system_pods.go:61] "etcd-kubernetes-upgrade-20220906152610-22187" [47f63356-0ec6-43a6-ae86-c717d0b9aeec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:31:17.425197   32743 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220906152610-22187" [019a0774-10b6-4875-9dbb-a86098d3b701] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:31:17.425203   32743 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220906152610-22187" [4375b33f-8df2-4996-b5ae-5be61a0f3b10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 15:31:17.425213   32743 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220906152610-22187" [bfbe21d2-a78f-4e42-879d-1bf8b273500e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:31:17.425220   32743 system_pods.go:61] "storage-provisioner" [d364f749-72ec-4ed6-be3a-9ae140deb3f6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0906 15:31:17.425224   32743 system_pods.go:74] duration metric: took 4.965318ms to wait for pod list to return data ...
	I0906 15:31:17.425231   32743 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:31:17.427697   32743 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:31:17.427709   32743 node_conditions.go:123] node cpu capacity is 6
	I0906 15:31:17.427719   32743 node_conditions.go:105] duration metric: took 2.484403ms to run NodePressure ...
	I0906 15:31:17.427730   32743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:31:17.580866   32743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:31:17.588408   32743 ops.go:34] apiserver oom_adj: -16
	I0906 15:31:17.588418   32743 kubeadm.go:631] restartCluster took 11.879506543s
	I0906 15:31:17.588425   32743 kubeadm.go:398] StartCluster complete in 12.040106669s
	I0906 15:31:17.588437   32743 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:31:17.588505   32743 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:31:17.588941   32743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:31:17.589536   32743 kapi.go:59] client config for kubernetes-upgrade-20220906152610-22187: &rest.Config{Host:"https://127.0.0.1:58046", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:31:17.591953   32743 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220906152610-22187" rescaled to 1
	I0906 15:31:17.591990   32743 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:31:17.591998   32743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:31:17.592039   32743 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0906 15:31:17.592186   32743 config.go:180] Loaded profile config "kubernetes-upgrade-20220906152610-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:31:17.612313   32743 out.go:177] * Verifying Kubernetes components...
	I0906 15:31:17.612398   32743 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220906152610-22187"
	I0906 15:31:17.612391   32743 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220906152610-22187"
	I0906 15:31:17.649586   32743 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220906152610-22187"
	I0906 15:31:17.649588   32743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0906 15:31:17.649601   32743 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:31:17.649583   32743 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220906152610-22187"
	I0906 15:31:17.649673   32743 host.go:66] Checking if "kubernetes-upgrade-20220906152610-22187" exists ...
	I0906 15:31:17.649970   32743 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220906152610-22187 --format={{.State.Status}}
	I0906 15:31:17.649984   32743 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220906152610-22187 --format={{.State.Status}}
	I0906 15:31:17.658526   32743 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 15:31:17.663370   32743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:31:17.726082   32743 kapi.go:59] client config for kubernetes-upgrade-20220906152610-22187: &rest.Config{Host:"https://127.0.0.1:58046", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubernetes-upgrade-20220906152610-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:31:17.747534   32743 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:31:17.754009   32743 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220906152610-22187"
	W0906 15:31:17.769190   32743 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:31:17.769248   32743 host.go:66] Checking if "kubernetes-upgrade-20220906152610-22187" exists ...
	I0906 15:31:17.769363   32743 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:31:17.769393   32743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:31:17.769556   32743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:31:17.771253   32743 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220906152610-22187 --format={{.State.Status}}
	I0906 15:31:17.778245   32743 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:31:17.778314   32743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:31:17.789333   32743 api_server.go:71] duration metric: took 197.317697ms to wait for apiserver process to appear ...
	I0906 15:31:17.789348   32743 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:31:17.789358   32743 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58046/healthz ...
	I0906 15:31:17.795875   32743 api_server.go:266] https://127.0.0.1:58046/healthz returned 200:
	ok
	I0906 15:31:17.797490   32743 api_server.go:140] control plane version: v1.25.0
	I0906 15:31:17.797501   32743 api_server.go:130] duration metric: took 8.148518ms to wait for apiserver health ...
	I0906 15:31:17.797506   32743 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:31:17.804113   32743 system_pods.go:59] 5 kube-system pods found
	I0906 15:31:17.804135   32743 system_pods.go:61] "etcd-kubernetes-upgrade-20220906152610-22187" [47f63356-0ec6-43a6-ae86-c717d0b9aeec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:31:17.804146   32743 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220906152610-22187" [019a0774-10b6-4875-9dbb-a86098d3b701] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:31:17.804151   32743 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220906152610-22187" [4375b33f-8df2-4996-b5ae-5be61a0f3b10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 15:31:17.804159   32743 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220906152610-22187" [bfbe21d2-a78f-4e42-879d-1bf8b273500e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:31:17.804185   32743 system_pods.go:61] "storage-provisioner" [d364f749-72ec-4ed6-be3a-9ae140deb3f6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0906 15:31:17.804193   32743 system_pods.go:74] duration metric: took 6.682588ms to wait for pod list to return data ...
	I0906 15:31:17.804200   32743 kubeadm.go:573] duration metric: took 212.189087ms to wait for : map[apiserver:true system_pods:true] ...
	I0906 15:31:17.804211   32743 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:31:17.807242   32743 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:31:17.807254   32743 node_conditions.go:123] node cpu capacity is 6
	I0906 15:31:17.807265   32743 node_conditions.go:105] duration metric: took 3.041526ms to run NodePressure ...
	I0906 15:31:17.807293   32743 start.go:216] waiting for startup goroutines ...
	I0906 15:31:17.848655   32743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58042 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/kubernetes-upgrade-20220906152610-22187/id_rsa Username:docker}
	I0906 15:31:17.849541   32743 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:31:17.849551   32743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:31:17.849602   32743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220906152610-22187
	I0906 15:31:17.920291   32743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58042 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/kubernetes-upgrade-20220906152610-22187/id_rsa Username:docker}
	I0906 15:31:17.944511   32743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:31:18.020662   32743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:31:18.602506   32743 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 15:31:18.624290   32743 addons.go:414] enableAddons completed in 1.032274206s
	I0906 15:31:18.679661   32743 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:31:18.701129   32743 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220906152610-22187" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:30:29 UTC, end at Tue 2022-09-06 22:31:20 UTC. --
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.708579623Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.708659972Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.708714093Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.708774628Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.709591242Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.709648192Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.709724352Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.709764464Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.713251620Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.718606483Z" level=info msg="Loading containers: start."
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.813021679Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.849646729Z" level=info msg="Loading containers: done."
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.859033961Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.859099544Z" level=info msg="Daemon has completed initialization"
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.880866825Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:31:03 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:03.886606043Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 22:31:10 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:10.334468887Z" level=info msg="ignoring event" container=1407c31449dfba9c076a0d574d09a38382d89a587198ed92980799496f684794 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:31:10 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:10.338974560Z" level=info msg="ignoring event" container=c9955c428b238b224bb43144323c93844cba79ad7aca65259b6d48c53678cbf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:31:10 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:10.343071587Z" level=info msg="ignoring event" container=0ebcd4c4ff568d15a7e759fd9ac161ada8ed5f1dae84ff96511d59314b43c2b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:31:10 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:10.345568024Z" level=info msg="ignoring event" container=c68c1554f096f6ed24aeceb560e5eb3d541b6d3e473a66fff9b32d4164dcd142 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:31:10 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:10.347102478Z" level=info msg="ignoring event" container=324b77f8e359477a121d94ef3a16b1aa5978ad037ab850b112d0d34524ce0158 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:31:10 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:10.348846376Z" level=info msg="ignoring event" container=60ccfe1a5eddf8f36d479a8cb93b642001147b8b57ca304ac5ca98d49b0174e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:31:10 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:10.353996219Z" level=info msg="ignoring event" container=a9c5e200281191e2ff521103b332c2c2b722e9713a5ced5b3aade88f8aa32c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:31:10 kubernetes-upgrade-20220906152610-22187 dockerd[2346]: time="2022-09-06T22:31:10.651057152Z" level=info msg="ignoring event" container=90ed4637296dfa5b9478ce71eeefa0591647a56490fc4d543dabf69489f1177f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	3fa7bc8050a72       bef2cf3115095       8 seconds ago       Running             kube-scheduler            2                   6087025c49f62
	ec69497462026       1a54c86c03a67       8 seconds ago       Running             kube-controller-manager   2                   f863f6332a377
	9714d73dd7e53       4d2edfd10d3e3       8 seconds ago       Running             kube-apiserver            2                   5697c485bec01
	e91fcfccc8a25       a8a176a5d5d69       8 seconds ago       Running             etcd                      2                   e7c9120766b5d
	a9c5e20028119       a8a176a5d5d69       16 seconds ago      Exited              etcd                      1                   c9955c428b238
	90ed4637296df       4d2edfd10d3e3       16 seconds ago      Exited              kube-apiserver            1                   324b77f8e3594
	0ebcd4c4ff568       bef2cf3115095       16 seconds ago      Exited              kube-scheduler            1                   1407c31449dfb
	c68c1554f096f       1a54c86c03a67       16 seconds ago      Exited              kube-controller-manager   1                   60ccfe1a5eddf
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220906152610-22187
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220906152610-22187
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:30:49 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220906152610-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:31:16 +0000   Tue, 06 Sep 2022 22:30:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:31:16 +0000   Tue, 06 Sep 2022 22:30:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:31:16 +0000   Tue, 06 Sep 2022 22:30:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 22:31:16 +0000   Tue, 06 Sep 2022 22:31:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-20220906152610-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                bc484242-406d-46fa-ac8e-082691d27e12
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-20220906152610-22187                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         24s
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220906152610-22187             250m (4%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220906152610-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220906152610-22187             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 37s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet  Node kubernetes-upgrade-20220906152610-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet  Node kubernetes-upgrade-20220906152610-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 37s)  kubelet  Node kubernetes-upgrade-20220906152610-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.001536] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001105] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001751] FS-Cache: N-cookie d=000000006f57a5f8 n=0000000004119ae2
	[  +0.001424] FS-Cache: N-key=[8] '89c5800300000000'
	[  +0.002109] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000d596ead8 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001797] FS-Cache: O-cookie d=000000006f57a5f8 n=00000000f83b458d
	[  +0.001466] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001134] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001810] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001458] FS-Cache: N-key=[8] '89c5800300000000'
	[  +3.680989] FS-Cache: Duplicate cookie detected
	[  +0.001019] FS-Cache: O-cookie c=000000003a8c8805 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000057637cac
	[  +0.001460] FS-Cache: O-key=[8] '88c5800300000000'
	[  +0.001144] FS-Cache: N-cookie c=000000000ab19587 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001761] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001454] FS-Cache: N-key=[8] '88c5800300000000'
	[  +0.676412] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000dd15d770 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000060e892c8
	[  +0.001441] FS-Cache: O-key=[8] '93c5800300000000'
	[  +0.001122] FS-Cache: N-cookie c=00000000e728d4f6 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001752] FS-Cache: N-cookie d=000000006f57a5f8 n=000000009b87565f
	[  +0.001438] FS-Cache: N-key=[8] '93c5800300000000'
	
	* 
	* ==> etcd [a9c5e2002811] <==
	* {"level":"info","ts":"2022-09-06T22:31:05.264Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-06T22:31:05.264Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:31:05.264Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:31:06.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-09-06T22:31:06.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:31:06.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-06T22:31:06.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-09-06T22:31:06.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-09-06T22:31:06.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-09-06T22:31:06.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-09-06T22:31:06.935Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-20220906152610-22187 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:31:06.935Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:31:06.935Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:31:06.935Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:31:06.935Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:31:06.937Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:31:06.937Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-09-06T22:31:10.307Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-09-06T22:31:10.307Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-20220906152610-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/09/06 22:31:10 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/09/06 22:31:10 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-09-06T22:31:10.319Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-09-06T22:31:10.320Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-06T22:31:10.322Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-06T22:31:10.322Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-20220906152610-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [e91fcfccc8a2] <==
	* {"level":"info","ts":"2022-09-06T22:31:13.054Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-09-06T22:31:13.056Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-09-06T22:31:13.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-09-06T22:31:13.056Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-09-06T22:31:13.056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:31:13.056Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:31:13.057Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-06T22:31:13.057Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-06T22:31:13.057Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-06T22:31:13.057Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:31:13.057Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:31:14.349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2022-09-06T22:31:14.349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-09-06T22:31:14.349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-09-06T22:31:14.349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2022-09-06T22:31:14.349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-09-06T22:31:14.349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2022-09-06T22:31:14.349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-09-06T22:31:14.350Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-20220906152610-22187 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:31:14.350Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:31:14.351Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:31:14.351Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:31:14.351Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:31:14.352Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:31:14.352Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:31:21 up 47 min,  0 users,  load average: 2.09, 1.15, 0.76
	Linux kubernetes-upgrade-20220906152610-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [90ed4637296d] <==
	* I0906 22:31:10.314352       1 controller.go:122] Shutting down OpenAPI controller
	I0906 22:31:10.314346       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0906 22:31:10.314362       1 storage_flowcontrol.go:172] APF bootstrap ensurer is exiting
	I0906 22:31:10.314367       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0906 22:31:10.314374       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0906 22:31:10.314380       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0906 22:31:10.314386       1 establishing_controller.go:87] Shutting down EstablishingController
	I0906 22:31:10.314391       1 naming_controller.go:302] Shutting down NamingConditionController
	I0906 22:31:10.314397       1 controller.go:115] Shutting down OpenAPI V3 controller
	I0906 22:31:10.314404       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0906 22:31:10.314411       1 available_controller.go:503] Shutting down AvailableConditionController
	I0906 22:31:10.314420       1 customresource_discovery_controller.go:245] Shutting down DiscoveryController
	I0906 22:31:10.314430       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0906 22:31:10.314442       1 apf_controller.go:309] Shutting down API Priority and Fairness config worker
	I0906 22:31:10.314464       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0906 22:31:10.314476       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 22:31:10.314484       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0906 22:31:10.314489       1 controller.go:89] Shutting down OpenAPI AggregationController
	I0906 22:31:10.314497       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0906 22:31:10.314504       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0906 22:31:10.314528       1 secure_serving.go:255] Stopped listening on [::]:8443
	I0906 22:31:10.314564       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 22:31:10.314596       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0906 22:31:10.314691       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0906 22:31:10.314731       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	* 
	* ==> kube-apiserver [9714d73dd7e5] <==
	* I0906 22:31:15.915373       1 establishing_controller.go:76] Starting EstablishingController
	I0906 22:31:15.915381       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0906 22:31:15.915388       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0906 22:31:15.915397       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0906 22:31:15.917409       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0906 22:31:15.917475       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0906 22:31:15.913908       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0906 22:31:15.917546       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0906 22:31:15.913918       1 available_controller.go:491] Starting AvailableConditionController
	I0906 22:31:15.917797       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0906 22:31:15.935173       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:31:15.940572       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0906 22:31:16.009367       1 cache.go:39] Caches are synced for autoregister controller
	I0906 22:31:16.009410       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0906 22:31:16.009890       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0906 22:31:16.018693       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 22:31:16.018765       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 22:31:16.019820       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0906 22:31:16.715408       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 22:31:16.916243       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 22:31:17.499361       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:31:17.505944       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:31:17.561905       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:31:17.576711       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:31:17.580995       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [c68c1554f096] <==
	* I0906 22:31:05.865250       1 serving.go:348] Generated self-signed cert in-memory
	I0906 22:31:06.216995       1 controllermanager.go:178] Version: v1.25.0
	I0906 22:31:06.217037       1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:31:06.217996       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 22:31:06.218120       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 22:31:06.218604       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0906 22:31:06.218839       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [ec6949746202] <==
	* I0906 22:31:18.013426       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for limitranges
	I0906 22:31:18.013446       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0906 22:31:18.013503       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates
	I0906 22:31:18.013521       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for deployments.apps
	I0906 22:31:18.013535       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0906 22:31:18.013549       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpoints
	W0906 22:31:18.013557       1 shared_informer.go:533] resyncPeriod 15h49m38.71970978s is smaller than resyncCheckPeriod 23h57m14.404696192s and the informer has already started. Changing it to 23h57m14.404696192s
	I0906 22:31:18.013582       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for serviceaccounts
	I0906 22:31:18.013628       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for daemonsets.apps
	I0906 22:31:18.013649       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	I0906 22:31:18.013683       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for cronjobs.batch
	I0906 22:31:18.013696       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0906 22:31:18.013711       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for statefulsets.apps
	I0906 22:31:18.013725       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for replicasets.apps
	I0906 22:31:18.013733       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I0906 22:31:18.013754       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0906 22:31:18.013791       1 controllermanager.go:602] Started "resourcequota"
	I0906 22:31:18.013918       1 resource_quota_controller.go:277] Starting resource quota controller
	I0906 22:31:18.014121       1 shared_informer.go:255] Waiting for caches to sync for resource quota
	I0906 22:31:18.014136       1 resource_quota_monitor.go:295] QuotaMonitor running
	I0906 22:31:18.015718       1 controllermanager.go:602] Started "csrapproving"
	I0906 22:31:18.015889       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
	I0906 22:31:18.016122       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
	I0906 22:31:18.022128       1 node_ipam_controller.go:91] Sending events to api server.
	I0906 22:31:18.048875       1 shared_informer.go:262] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [0ebcd4c4ff56] <==
	* I0906 22:31:05.779144       1 serving.go:348] Generated self-signed cert in-memory
	W0906 22:31:08.621962       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 22:31:08.621998       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:31:08.622006       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 22:31:08.622011       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 22:31:08.631134       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.0"
	I0906 22:31:08.631183       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:31:08.633082       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 22:31:08.633115       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:31:08.633116       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 22:31:08.633343       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:31:08.734284       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:31:10.265589       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0906 22:31:10.266520       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0906 22:31:10.266873       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I0906 22:31:10.267115       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0906 22:31:10.267452       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [3fa7bc8050a7] <==
	* I0906 22:31:13.834173       1 serving.go:348] Generated self-signed cert in-memory
	I0906 22:31:15.949086       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.0"
	I0906 22:31:15.949118       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:31:15.952100       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0906 22:31:15.952130       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0906 22:31:15.952233       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0906 22:31:15.952331       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 22:31:15.952267       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 22:31:15.952381       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:31:15.952434       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:31:15.952245       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 22:31:16.052376       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0906 22:31:16.052503       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 22:31:16.052648       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:30:29 UTC, end at Tue 2022-09-06 22:31:22 UTC. --
	Sep 06 22:31:13 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:13.933882    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.034530    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.135609    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.235851    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.337111    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.438319    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.538479    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.639340    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.740222    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.841303    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:14 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:14.942382    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:15 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:15.042911    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:15 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:15.143923    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:15 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:15.244071    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:15 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:15.344958    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:15 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:15.446106    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:15 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:15.546775    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:15 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:15.647734    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:15 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:15.748338    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:15 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:15.849114    3655 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-20220906152610-22187\" not found"
	Sep 06 22:31:16 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: I0906 22:31:16.023524    3655 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220906152610-22187"
	Sep 06 22:31:16 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: I0906 22:31:16.023701    3655 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220906152610-22187"
	Sep 06 22:31:16 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: I0906 22:31:16.214481    3655 apiserver.go:52] "Watching apiserver"
	Sep 06 22:31:16 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: I0906 22:31:16.252193    3655 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 22:31:16 kubernetes-upgrade-20220906152610-22187 kubelet[3655]: E0906 22:31:16.817805    3655 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-20220906152610-22187\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-20220906152610-22187"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220906152610-22187 -n kubernetes-upgrade-20220906152610-22187
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220906152610-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220906152610-22187 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220906152610-22187 describe pod storage-provisioner: exit status 1 (49.887954ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220906152610-22187 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220906152610-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220906152610-22187

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220906152610-22187: (2.84650974s)
--- FAIL: TestKubernetesUpgrade (315.38s)

x
+
TestMissingContainerUpgrade (70.56s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1898865250.exe start -p missing-upgrade-20220906152523-22187 --memory=2200 --driver=docker 
E0906 15:25:50.127931   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1898865250.exe start -p missing-upgrade-20220906152523-22187 --memory=2200 --driver=docker : exit status 78 (47.081032686s)

-- stdout --
	! [missing-upgrade-20220906152523-22187] minikube v1.9.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220906152523-22187
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220906152523-22187" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	* minikube 1.26.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (incremental download-progress updates elided)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:25:58.805088025 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220906152523-22187" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:26:09.253319772 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

** /stderr **
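The docker.service diff quoted in the error above documents a general systemd pattern: multiple ExecStart= lines are only allowed for Type=oneshot units, so a unit that inherits a command from a base configuration must first blank ExecStart= before supplying its own. A minimal sketch of that pattern, assuming a hypothetical override file and an illustrative daemon command (neither is taken verbatim from this run):

	# /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
	[Service]
	# An empty ExecStart= clears the command inherited from the base unit.
	# Without it, systemd rejects the unit with the error the diff's comments
	# quote: "Service has more than one ExecStart= setting, which is only
	# allowed for Type=oneshot services."
	ExecStart=
	# Replacement command; flags are illustrative, not this run's exact set.
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

After editing such a file, `sudo systemctl daemon-reload && sudo systemctl restart docker` applies it, which is the step that fails in the run above.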
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1898865250.exe start -p missing-upgrade-20220906152523-22187 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1898865250.exe start -p missing-upgrade-20220906152523-22187 --memory=2200 --driver=docker : exit status 70 (12.724838885s)

-- stdout --
	* [missing-upgrade-20220906152523-22187] minikube v1.9.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220906152523-22187
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Updating the running docker "missing-upgrade-20220906152523-22187" container ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (incremental download-progress updates elided)
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1898865250.exe start -p missing-upgrade-20220906152523-22187 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1898865250.exe start -p missing-upgrade-20220906152523-22187 --memory=2200 --driver=docker : exit status 70 (4.240384351s)

-- stdout --
	* [missing-upgrade-20220906152523-22187] minikube v1.9.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220906152523-22187
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220906152523-22187" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2022-09-06 15:26:31.164086 -0700 PDT m=+2556.489661675
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220906152523-22187
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220906152523-22187:

-- stdout --
	[
	    {
	        "Id": "52213afa6e79c82001eb791f411a276b31ea780172b267eb048e6a9a3e8a9fc0",
	        "Created": "2022-09-06T22:26:06.975136474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 131121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:26:07.193386055Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/52213afa6e79c82001eb791f411a276b31ea780172b267eb048e6a9a3e8a9fc0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52213afa6e79c82001eb791f411a276b31ea780172b267eb048e6a9a3e8a9fc0/hostname",
	        "HostsPath": "/var/lib/docker/containers/52213afa6e79c82001eb791f411a276b31ea780172b267eb048e6a9a3e8a9fc0/hosts",
	        "LogPath": "/var/lib/docker/containers/52213afa6e79c82001eb791f411a276b31ea780172b267eb048e6a9a3e8a9fc0/52213afa6e79c82001eb791f411a276b31ea780172b267eb048e6a9a3e8a9fc0-json.log",
	        "Name": "/missing-upgrade-20220906152523-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20220906152523-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a48dd3ad67f611ec989d608c54afc1ac397ed7086c96a872f52b7901e7deb646-init/diff:/var/lib/docker/overlay2/4bcc8a2ebeec26fe77f5d52b8018a59d3e0a92757805287878e19b9524121dee/diff:/var/lib/docker/overlay2/36fc4a9399fbe3e3cee20c3c0bce2585043206983f214d5b89aa3269114bcbb2/diff:/var/lib/docker/overlay2/1e6255bdc9f01561a9772c464c1856682eab454eb4d93e0d98ef6338cfaaa3a3/diff:/var/lib/docker/overlay2/8205e05e02c7f1a01bb3162924c7c6851005b531f0ffa211af7ef2e636460df0/diff:/var/lib/docker/overlay2/51f1f9eb703b74b9d9197352989b984a3ed815f7c5960a2ecc84b3daad7daaca/diff:/var/lib/docker/overlay2/5ae9570f6dc344cdc352bfff39d75a4ae859199a98f372cdaa0502abf2e91e57/diff:/var/lib/docker/overlay2/fb92a82c3845b0e174133c26b284b3b3d3f6d016a68c6e4a8ca1017c777139ea/diff:/var/lib/docker/overlay2/28933cc22ec5056aec1614407fa7ccd844df051593e89525d4ff2e26944a5124/diff:/var/lib/docker/overlay2/d6c19ba19b6849bcd8b4bdaa37afa35f943eb4c4d1a2eb005c12e85b6f7de1ab/diff:/var/lib/docker/overlay2/d4b37e
003d7fa7d2e8a725c8e2e08e3701a91ffb7820f794af53c6012bee469f/diff:/var/lib/docker/overlay2/d67996e354c6052b529c519812512a911b818ee71efdfec8a38c2b7e2361b81c/diff:/var/lib/docker/overlay2/2bb4569be621a7154609a53d42d51043608a28b03cc74ba20bf14888bd2e26c4/diff:/var/lib/docker/overlay2/edb207de691d238ce84a95da5349cbdbb80b61a9575d95ade1d1416ceca92132/diff:/var/lib/docker/overlay2/50ba72a2463f1bd6435b48ef39abab87510d651e3519178b3428deddfe57eb7d/diff:/var/lib/docker/overlay2/be91accf7c79ac6368ad99ffba50aa047ede964894d31ac264317b1ae48c8a76/diff:/var/lib/docker/overlay2/5e1130fd476ebe81932fca2a567dc93e3aeaccf909f5a67741befb221e2ac990/diff:/var/lib/docker/overlay2/e605dbaa58f001b49c3fc79fdb124f069f666ad53cb61d92bfde06324430abe0/diff:/var/lib/docker/overlay2/b9177eb3db6cf8bcb4e76369bdb53732e41adcaaf31eb4c57f5acff05d9270fd/diff:/var/lib/docker/overlay2/38368a8a478a6c2c3ef0c46e0b5bd86883498da1beaf63d4d5e442f5d8bc067b/diff:/var/lib/docker/overlay2/a930e60ec726200d88722ae495a6b290141625d4dce1d61693fab2f6bcab042f/diff:/var/lib/d
ocker/overlay2/27aa1a22b914fcffdb24b1ad2ede608837762e9b22d4c8067256c39993583f6d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a48dd3ad67f611ec989d608c54afc1ac397ed7086c96a872f52b7901e7deb646/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a48dd3ad67f611ec989d608c54afc1ac397ed7086c96a872f52b7901e7deb646/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a48dd3ad67f611ec989d608c54afc1ac397ed7086c96a872f52b7901e7deb646/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220906152523-22187",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220906152523-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220906152523-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220906152523-22187",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220906152523-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f2fdb02984b7b49d79fff4dfee64a32fd7453e3240d8e8c1103af02fadd73ecf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57701"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57702"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57703"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f2fdb02984b7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "bacb0144185118ea27f98f405f40e1885f40da6ba4c22caa3996a63c69c7c7be",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "c81be874dd90e6337c4ce784b847206141dac04ab69ca89f7bdf5bce8ad92d19",
	                    "EndpointID": "bacb0144185118ea27f98f405f40e1885f40da6ba4c22caa3996a63c69c7c7be",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220906152523-22187 -n missing-upgrade-20220906152523-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220906152523-22187 -n missing-upgrade-20220906152523-22187: exit status 6 (433.383362ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0906 15:26:31.656206   31351 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220906152523-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220906152523-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-20220906152523-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220906152523-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220906152523-22187: (2.373975366s)
--- FAIL: TestMissingContainerUpgrade (70.56s)
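Exit status 6 here is the kubeconfig check in status.go tripping, not the cluster itself: after the upgrade the profile's context is simply absent from the kubeconfig, so the endpoint extraction has nothing to read. A rough client-go sketch of the failing lookup (assumes k8s.io/client-go; path and profile name are the ones from the error above):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        path := "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        name := "missing-upgrade-20220906152523-22187"
        if _, ok := cfg.Contexts[name]; !ok {
            fmt.Printf("%q does not appear in %s\n", name, path) // same condition status.go reports
        }
    }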

TestStoppedBinaryUpgrade/Upgrade (46.05s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1334244525.exe start -p stopped-upgrade-20220906152634-22187 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1334244525.exe start -p stopped-upgrade-20220906152634-22187 --memory=2200 --vm-driver=docker : exit status 70 (34.641526685s)

-- stdout --
	* [stopped-upgrade-20220906152634-22187] minikube v1.9.0 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig2399452451
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:26:51.111398709 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220906152634-22187" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:27:07.132074958 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220906152634-22187", then "minikube start -p stopped-upgrade-20220906152634-22187 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (carriage-return download progress ticker; intermediate updates between 11.75 MiB and 542.91 MiB elided)*
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:27:07.132074958 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
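The test does not give up on the first exit status 70: version_upgrade_test.go re-runs the same legacy binary below, and the later attempts reuse the existing profile and fail faster on the already-broken docker.service. Roughly this shape (a sketch of the retry pattern, not the test's actual code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        bin := "/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1334244525.exe"
        args := []string{"start", "-p", "stopped-upgrade-20220906152634-22187",
            "--memory=2200", "--vm-driver=docker"}
        var err error
        for attempt := 1; attempt <= 3; attempt++ {
            if err = exec.Command(bin, args...).Run(); err == nil {
                return // started cleanly
            }
            fmt.Printf("attempt %d: %v\n", attempt, err) // here: "exit status 70" each time
        }
        panic(err) // all attempts failed -> FAIL at version_upgrade_test.go:196
    }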
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1334244525.exe start -p stopped-upgrade-20220906152634-22187 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1334244525.exe start -p stopped-upgrade-20220906152634-22187 --memory=2200 --vm-driver=docker : exit status 70 (4.323429963s)

-- stdout --
	* [stopped-upgrade-20220906152634-22187] minikube v1.9.0 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig2629368524
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220906152634-22187" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1334244525.exe start -p stopped-upgrade-20220906152634-22187 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1334244525.exe start -p stopped-upgrade-20220906152634-22187 --memory=2200 --vm-driver=docker : exit status 70 (4.308190021s)

-- stdout --
	* [stopped-upgrade-20220906152634-22187] minikube v1.9.0 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1170457436
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220906152634-22187" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (46.05s)
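One detail worth noticing in the diffs above: the unit the v1.9.0 provisioner rendered ends with "ExecReload=/bin/kill -s HUP " and nothing to signal. The unit text travels to the machine through a remote shell, and an unescaped $MAINPID appears to be expanded (to nothing) on the way in; the current binary escapes it, which is why the pause log further down shows `\$MAINPID` in the command and `$MAINPID` in the written file. A tiny sketch with os.ExpandEnv standing in for the remote shell:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        line := "ExecReload=/bin/kill -s HUP $MAINPID"
        // MAINPID is not set in this process's environment, so the reference
        // expands to the empty string -- matching the truncated line in the diff.
        fmt.Println(os.ExpandEnv(line)) // -> "ExecReload=/bin/kill -s HUP "
    }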

TestPause/serial/SecondStartNoReconfiguration (76.61s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220906152815-22187 --alsologtostderr -v=1 --driver=docker 
E0906 15:29:56.968877   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:29:56.975248   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:29:56.985395   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:29:57.006169   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:29:57.046740   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:29:57.127946   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:29:57.288838   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:29:57.609759   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:29:58.251403   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:29:59.533646   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:30:02.094084   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:30:07.214212   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220906152815-22187 --alsologtostderr -v=1 --driver=docker : (1m8.295631871s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-20220906152815-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node pause-20220906152815-22187 in cluster pause-20220906152815-22187
	* Pulling base image ...
	* Updating the running docker "pause-20220906152815-22187" container ...
	* Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "pause-20220906152815-22187" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0906 15:28:59.603166   32195 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:28:59.603336   32195 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:28:59.603341   32195 out.go:309] Setting ErrFile to fd 2...
	I0906 15:28:59.603345   32195 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:28:59.603456   32195 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:28:59.603903   32195 out.go:303] Setting JSON to false
	I0906 15:28:59.619038   32195 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8910,"bootTime":1662494429,"procs":333,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:28:59.619144   32195 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:28:59.647374   32195 out.go:177] * [pause-20220906152815-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:28:59.688869   32195 notify.go:193] Checking for updates...
	I0906 15:28:59.709648   32195 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:28:59.730836   32195 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:28:59.751688   32195 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:28:59.772603   32195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:28:59.793912   32195 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:28:59.815496   32195 config.go:180] Loaded profile config "pause-20220906152815-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:28:59.816152   32195 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:28:59.886077   32195 docker.go:137] docker version: linux-20.10.17
	I0906 15:28:59.886228   32195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:29:00.018367   32195 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:56 SystemTime:2022-09-06 22:28:59.960654625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:29:00.061749   32195 out.go:177] * Using the docker driver based on existing profile
	I0906 15:29:00.082772   32195 start.go:284] selected driver: docker
	I0906 15:29:00.082793   32195 start.go:808] validating driver "docker" against &{Name:pause-20220906152815-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:29:00.082915   32195 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:29:00.083047   32195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:29:00.214503   32195 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:56 SystemTime:2022-09-06 22:29:00.158060708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:29:00.216567   32195 cni.go:95] Creating CNI manager for ""
	I0906 15:29:00.216586   32195 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:29:00.216601   32195 start_flags.go:310] config:
	{Name:pause-20220906152815-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:29:00.238278   32195 out.go:177] * Starting control plane node pause-20220906152815-22187 in cluster pause-20220906152815-22187
	I0906 15:29:00.259224   32195 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:29:00.281291   32195 out.go:177] * Pulling base image ...
	I0906 15:29:00.323997   32195 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:29:00.324000   32195 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:29:00.324090   32195 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:29:00.324109   32195 cache.go:57] Caching tarball of preloaded images
	I0906 15:29:00.324669   32195 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:29:00.324806   32195 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:29:00.325077   32195 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/config.json ...
	I0906 15:29:00.386985   32195 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:29:00.387002   32195 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:29:00.387013   32195 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:29:00.387060   32195 start.go:364] acquiring machines lock for pause-20220906152815-22187: {Name:mk4180017503fe44437ec5e270ffb6df449347ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:29:00.387152   32195 start.go:368] acquired machines lock for "pause-20220906152815-22187" in 75.414µs
	I0906 15:29:00.387173   32195 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:29:00.387184   32195 fix.go:55] fixHost starting: 
	I0906 15:29:00.387433   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:29:00.453101   32195 fix.go:103] recreateIfNeeded on pause-20220906152815-22187: state=Running err=<nil>
	W0906 15:29:00.453131   32195 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:29:00.474904   32195 out.go:177] * Updating the running docker "pause-20220906152815-22187" container ...
	I0906 15:29:00.516757   32195 machine.go:88] provisioning docker machine ...
	I0906 15:29:00.516816   32195 ubuntu.go:169] provisioning hostname "pause-20220906152815-22187"
	I0906 15:29:00.516938   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:00.593870   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:00.594076   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:00.594094   32195 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220906152815-22187 && echo "pause-20220906152815-22187" | sudo tee /etc/hostname
	I0906 15:29:00.714936   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220906152815-22187
	
	I0906 15:29:00.715006   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:00.779722   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:00.779866   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:00.779880   32195 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220906152815-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220906152815-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220906152815-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:29:00.892102   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:29:00.892138   32195 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:29:00.892169   32195 ubuntu.go:177] setting up certificates
	I0906 15:29:00.892186   32195 provision.go:83] configureAuth start
	I0906 15:29:00.892256   32195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220906152815-22187
	I0906 15:29:00.956093   32195 provision.go:138] copyHostCerts
	I0906 15:29:00.956278   32195 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:29:00.956289   32195 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:29:00.956389   32195 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:29:00.956593   32195 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:29:00.956603   32195 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:29:00.956659   32195 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:29:00.956797   32195 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:29:00.956802   32195 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:29:00.956860   32195 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:29:00.957005   32195 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.pause-20220906152815-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220906152815-22187]
	I0906 15:29:01.118415   32195 provision.go:172] copyRemoteCerts
	I0906 15:29:01.118478   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:29:01.118520   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.188789   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:01.271983   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:29:01.288103   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:29:01.305007   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0906 15:29:01.321565   32195 provision.go:86] duration metric: configureAuth took 429.36012ms
	I0906 15:29:01.321580   32195 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:29:01.321709   32195 config.go:180] Loaded profile config "pause-20220906152815-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:29:01.321780   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.387789   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:01.387938   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:01.387950   32195 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:29:01.501299   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:29:01.501319   32195 ubuntu.go:71] root file system type: overlay
	I0906 15:29:01.501485   32195 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:29:01.501582   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.567705   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:01.567859   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:01.567921   32195 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:29:01.690705   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:29:01.690788   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.756058   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:01.756216   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:01.756229   32195 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:29:01.872574   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:29:01.872588   32195 machine.go:91] provisioned docker machine in 1.355807968s
	I0906 15:29:01.872598   32195 start.go:300] post-start starting for "pause-20220906152815-22187" (driver="docker")
	I0906 15:29:01.872603   32195 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:29:01.872682   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:29:01.872729   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.938042   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.021994   32195 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:29:02.025757   32195 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:29:02.025772   32195 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:29:02.025778   32195 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:29:02.025784   32195 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:29:02.025793   32195 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:29:02.025908   32195 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:29:02.026042   32195 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:29:02.026194   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:29:02.033762   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:29:02.053064   32195 start.go:303] post-start completed in 180.456324ms
	I0906 15:29:02.053151   32195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:29:02.053223   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:02.118998   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.199982   32195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:29:02.204552   32195 fix.go:57] fixHost completed within 1.817365407s
	I0906 15:29:02.204565   32195 start.go:83] releasing machines lock for "pause-20220906152815-22187", held for 1.817401211s
	I0906 15:29:02.204638   32195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220906152815-22187
	I0906 15:29:02.269753   32195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:29:02.269771   32195 ssh_runner.go:195] Run: systemctl --version
	I0906 15:29:02.269830   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:02.269844   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:02.338654   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.338703   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
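Editor's note: the `curl -sS -m 2 https://registry.k8s.io/` probe above gives the guest a two-second budget to prove registry reachability. A rough host-side equivalent in Go; the URL and timeout come from the log, everything else is illustrative.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Two-second budget mirrors `curl -sS -m 2` from the log.
    	client := &http.Client{Timeout: 2 * time.Second}
    	resp, err := client.Get("https://registry.k8s.io/")
    	if err != nil {
    		fmt.Println("registry unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("registry reachable:", resp.Status)
    }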
	I0906 15:29:02.465608   32195 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:29:02.475666   32195 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:29:02.475718   32195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:29:02.487657   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:29:02.500842   32195 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:29:02.593048   32195 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:29:02.671572   32195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:29:02.758296   32195 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:29:09.816595   32195 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.058266504s)
	I0906 15:29:09.816654   32195 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:29:09.937473   32195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:29:10.050359   32195 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:29:10.075080   32195 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:29:10.075165   32195 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:29:10.081157   32195 start.go:471] Will wait 60s for crictl version
	I0906 15:29:10.081228   32195 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:29:10.121153   32195 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:29:10.121228   32195 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:29:10.197263   32195 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:29:10.349185   32195 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:29:10.349272   32195 cli_runner.go:164] Run: docker exec -t pause-20220906152815-22187 dig +short host.docker.internal
	I0906 15:29:10.521205   32195 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:29:10.521330   32195 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:29:10.525445   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:10.594219   32195 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:29:10.594284   32195 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:29:10.630834   32195 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:29:10.630851   32195 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:29:10.630919   32195 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:29:10.705526   32195 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:29:10.705552   32195 cache_images.go:84] Images are preloaded, skipping loading
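Editor's note: both `docker images` listings above already contain every image kubeadm will need, so the preload tarball is never extracted. A sketch of that membership check; the `required` list is a subset copied from the output above, and the decision logic is paraphrased rather than minikube's source.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Subset of the required images, copied from the log output above.
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.25.0",
    		"registry.k8s.io/etcd:3.5.4-0",
    		"registry.k8s.io/coredns/coredns:v1.9.3",
    	}
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		fmt.Println("docker images failed:", err)
    		return
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing image, preload tarball would be extracted:", img)
    			return
    		}
    	}
    	fmt.Println("images already preloaded, skipping extraction")
    }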
	I0906 15:29:10.705630   32195 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:29:10.810881   32195 cni.go:95] Creating CNI manager for ""
	I0906 15:29:10.810895   32195 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:29:10.810916   32195 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:29:10.810943   32195 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220906152815-22187 NodeName:pause-20220906152815-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:29:10.811060   32195 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-20220906152815-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
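Editor's note: the kubeadm config above is rendered from templates with the node's name, IP, version, and subnets substituted in. A reduced, hypothetical illustration using Go's text/template, covering only the InitConfiguration document:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Hypothetical parameter struct; the real generator carries many more fields.
    type params struct {
    	NodeName, NodeIP string
    }

    const initCfg = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
    	"kind: InitConfiguration\n" +
    	"localAPIEndpoint:\n" +
    	"  advertiseAddress: {{.NodeIP}}\n" +
    	"  bindPort: 8443\n" +
    	"nodeRegistration:\n" +
    	"  criSocket: /var/run/cri-dockerd.sock\n" +
    	"  name: \"{{.NodeName}}\"\n" +
    	"  kubeletExtraArgs:\n" +
    	"    node-ip: {{.NodeIP}}\n"

    func main() {
    	t := template.Must(template.New("init").Parse(initCfg))
    	p := params{NodeName: "pause-20220906152815-22187", NodeIP: "192.168.76.2"}
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }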
	I0906 15:29:10.811159   32195 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220906152815-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:29:10.811225   32195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:29:10.818935   32195 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:29:10.818998   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:29:10.825948   32195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0906 15:29:10.838545   32195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:29:10.851235   32195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0906 15:29:10.863636   32195 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:29:10.867408   32195 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187 for IP: 192.168.76.2
	I0906 15:29:10.867527   32195 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:29:10.867587   32195 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:29:10.867673   32195 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key
	I0906 15:29:10.867734   32195 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/apiserver.key.31bdca25
	I0906 15:29:10.867787   32195 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/proxy-client.key
	I0906 15:29:10.868011   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:29:10.868048   32195 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:29:10.868057   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:29:10.868104   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:29:10.868136   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:29:10.868165   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:29:10.868240   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:29:10.868791   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:29:10.905280   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:29:10.933507   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:29:10.954052   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 15:29:10.992199   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:29:11.010534   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:29:11.027152   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:29:11.044585   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:29:11.065209   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:29:11.082218   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:29:11.099206   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:29:11.115721   32195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:29:11.128022   32195 ssh_runner.go:195] Run: openssl version
	I0906 15:29:11.133153   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:29:11.141435   32195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:29:11.145438   32195 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:29:11.145479   32195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:29:11.150306   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:29:11.157884   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:29:11.165301   32195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:29:11.169161   32195 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:29:11.169196   32195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:29:11.174366   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:29:11.182851   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:29:11.190783   32195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:29:11.194947   32195 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:29:11.194982   32195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:29:11.200109   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
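Editor's note: each `openssl x509 -hash` / `ln -fs` pair above builds the `<subject-hash>.0` symlink that OpenSSL's c_rehash layout uses for CA lookup in /etc/ssl/certs. A self-contained Go sketch of the pair; the certs directory here is a temp dir, whereas the real run links into /etc/ssl/certs as root.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // hashLink recreates the <subject-hash>.0 symlink that OpenSSL uses to
    // find a CA certificate, mirroring the openssl + ln -fs pair in the log.
    func hashLink(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // emulate ln -fs: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// Illustrative paths; adjust to a cert that exists on your machine.
    	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir()); err != nil {
    		fmt.Println(err)
    	}
    }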
	I0906 15:29:11.207381   32195 kubeadm.go:396] StartCluster: {Name:pause-20220906152815-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:29:11.207477   32195 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:29:11.237437   32195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:29:11.244799   32195 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:29:11.244813   32195 kubeadm.go:627] restartCluster start
	I0906 15:29:11.244854   32195 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:29:11.251631   32195 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:11.251692   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:11.316600   32195 kubeconfig.go:92] found "pause-20220906152815-22187" server: "https://127.0.0.1:57914"
	I0906 15:29:11.317012   32195 kapi.go:59] client config for pause-20220906152815-22187: &rest.Config{Host:"https://127.0.0.1:57914", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:29:11.317569   32195 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:29:11.324940   32195 api_server.go:165] Checking apiserver status ...
	I0906 15:29:11.324985   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:29:11.334422   32195 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup
	W0906 15:29:11.344770   32195 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:11.344785   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:16.347203   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:16.347284   32195 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I0906 15:29:16.612497   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:21.615043   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:21.615078   32195 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I0906 15:29:21.997851   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:27.000257   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:27.200414   32195 api_server.go:165] Checking apiserver status ...
	I0906 15:29:27.200501   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:29:27.210198   32195 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup
	W0906 15:29:27.217873   32195 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:27.217884   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:31.050148   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:31.050200   32195 retry.go:31] will retry after 242.214273ms: state is "Stopped"
	I0906 15:29:31.294534   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:31.297376   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:31.297408   32195 retry.go:31] will retry after 300.724609ms: state is "Stopped"
	I0906 15:29:31.598448   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:31.600066   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:31.600082   32195 retry.go:31] will retry after 427.113882ms: state is "Stopped"
	I0906 15:29:32.027578   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:32.029159   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:32.029180   32195 retry.go:31] will retry after 382.2356ms: state is "Stopped"
	I0906 15:29:32.411742   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:32.414015   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:32.414042   32195 retry.go:31] will retry after 505.529557ms: state is "Stopped"
	I0906 15:29:32.919955   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:32.921927   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:32.921954   32195 retry.go:31] will retry after 609.195524ms: state is "Stopped"
	I0906 15:29:33.532401   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:33.534785   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:33.534805   32195 retry.go:31] will retry after 858.741692ms: state is "Stopped"
	I0906 15:29:34.395688   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:34.398122   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:34.398166   32195 retry.go:31] will retry after 1.201160326s: state is "Stopped"
	I0906 15:29:35.599387   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:35.601019   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:35.601041   32195 retry.go:31] will retry after 1.723796097s: state is "Stopped"
	I0906 15:29:37.327004   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:37.328789   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:37.328814   32195 retry.go:31] will retry after 1.596532639s: state is "Stopped"
	I0906 15:29:38.925505   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:38.927803   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:38.927831   32195 retry.go:31] will retry after 2.189373114s: state is "Stopped"
	I0906 15:29:41.119401   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:41.121885   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:41.121915   32195 api_server.go:165] Checking apiserver status ...
	I0906 15:29:41.121989   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:29:41.131832   32195 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
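Editor's note: the long run of healthz probes above is a poll loop with a growing retry interval and a hard deadline; when it expires without a 200, the cluster is marked for reconfiguration. A simplified sketch, assuming a self-signed apiserver, so certificate verification is skipped here rather than wiring in the client cert/key/CA from the kapi client config above.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second, // each probe in the log times out after ~5s
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	delay := 250 * time.Millisecond
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://127.0.0.1:57914/healthz")
    		switch {
    		case err != nil:
    			fmt.Println("probe failed:", err)
    		case resp.StatusCode == http.StatusOK:
    			resp.Body.Close()
    			fmt.Println("apiserver healthy")
    			return
    		default:
    			resp.Body.Close()
    			fmt.Println("healthz returned", resp.StatusCode)
    		}
    		time.Sleep(delay)
    		delay += delay / 2 // grow the interval, echoing retry.go's backoff
    	}
    	fmt.Println("deadline passed: cluster needs reconfigure")
    }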
	I0906 15:29:41.131845   32195 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:29:41.131853   32195 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:29:41.131907   32195 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:29:41.164251   32195 docker.go:443] Stopping containers: [e151952fd65e dc28e720c942 f11659194c9b 3d9be5d4c242 b733656d86e9 e3840ce5a18e 247a34f75411 4e466b02eec8 a8b84e6776d8 db12d18fbcf9 3865f67b411a 9cf4eeff8d93 036148d07169 4ea0eac22d19 50779b85909c c383af658559 b5ba814c1776 11733036d67d 1a262e52b8a6 525bc7e632ee 6d1b1ef7972d d2c9846b1e11 714381c26668 ae698ecfa8e3 e3db4859e6d0 e1b2edaa7ac1 48a9d0751dd6 25133ecf29f9]
	I0906 15:29:41.164335   32195 ssh_runner.go:195] Run: docker stop e151952fd65e dc28e720c942 f11659194c9b 3d9be5d4c242 b733656d86e9 e3840ce5a18e 247a34f75411 4e466b02eec8 a8b84e6776d8 db12d18fbcf9 3865f67b411a 9cf4eeff8d93 036148d07169 4ea0eac22d19 50779b85909c c383af658559 b5ba814c1776 11733036d67d 1a262e52b8a6 525bc7e632ee 6d1b1ef7972d d2c9846b1e11 714381c26668 ae698ecfa8e3 e3db4859e6d0 e1b2edaa7ac1 48a9d0751dd6 25133ecf29f9
	I0906 15:29:46.311600   32195 ssh_runner.go:235] Completed: docker stop e151952fd65e dc28e720c942 f11659194c9b 3d9be5d4c242 b733656d86e9 e3840ce5a18e 247a34f75411 4e466b02eec8 a8b84e6776d8 db12d18fbcf9 3865f67b411a 9cf4eeff8d93 036148d07169 4ea0eac22d19 50779b85909c c383af658559 b5ba814c1776 11733036d67d 1a262e52b8a6 525bc7e632ee 6d1b1ef7972d d2c9846b1e11 714381c26668 ae698ecfa8e3 e3db4859e6d0 e1b2edaa7ac1 48a9d0751dd6 25133ecf29f9: (5.147237759s)
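Editor's note: stopping the control plane above is a single batched `docker stop` over every container whose name matches kubelet's `k8s_<container>_<pod>_(kube-system)_` pattern. A minimal Go sketch of the list-then-stop sequence, with error handling trimmed to the essentials:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List IDs of containers whose names match kubelet's naming pattern.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return
    	}
    	// One batched stop, as in the log; each container may take seconds to exit.
    	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
    		fmt.Println(err)
    	}
    }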
	I0906 15:29:46.311676   32195 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:29:46.346530   32195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:29:46.354429   32195 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep  6 22:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 22:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2043 Sep  6 22:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:28 /etc/kubernetes/scheduler.conf
	
	I0906 15:29:46.354492   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:29:46.362206   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:29:46.370227   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:29:46.379342   32195 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:46.379408   32195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:29:46.388502   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:29:46.396359   32195 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:46.396413   32195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 15:29:46.403592   32195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:29:46.411337   32195 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:29:46.411354   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:46.474920   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.599906   32195 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.124964786s)
	I0906 15:29:47.599921   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.748957   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.797960   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.892070   32195 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:29:47.892139   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:29:47.902983   32195 api_server.go:71] duration metric: took 10.915563ms to wait for apiserver process to appear ...
	I0906 15:29:47.903008   32195 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:29:47.903024   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:52.903579   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:53.403672   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:53.473315   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:29:53.473337   32195 api_server.go:102] status: https://127.0.0.1:57914/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:29:53.904223   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:53.910683   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:29:53.910696   32195 api_server.go:102] status: https://127.0.0.1:57914/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:29:54.403658   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:54.409021   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:29:54.409043   32195 api_server.go:102] status: https://127.0.0.1:57914/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:29:54.904615   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:54.911776   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 200:
	ok
	I0906 15:29:54.917907   32195 api_server.go:140] control plane version: v1.25.0
	I0906 15:29:54.917917   32195 api_server.go:130] duration metric: took 7.014902451s to wait for apiserver health ...
	I0906 15:29:54.917922   32195 cni.go:95] Creating CNI manager for ""
	I0906 15:29:54.917929   32195 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:29:54.917939   32195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:29:54.924269   32195 system_pods.go:59] 7 kube-system pods found
	I0906 15:29:54.924284   32195 system_pods.go:61] "coredns-565d847f94-tb4pk" [d1b33e2e-6f0b-4dc5-b778-0ed14d441d68] Running
	I0906 15:29:54.924288   32195 system_pods.go:61] "coredns-565d847f94-xxcwh" [50ea7d09-4033-4175-9811-a28207750f60] Running
	I0906 15:29:54.924296   32195 system_pods.go:61] "etcd-pause-20220906152815-22187" [5e63d3ad-42b6-4fb6-b92a-be92b400528c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:29:54.924304   32195 system_pods.go:61] "kube-apiserver-pause-20220906152815-22187" [9cd2014e-f7cc-4e00-8163-0d16fb62a018] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:29:54.924312   32195 system_pods.go:61] "kube-controller-manager-pause-20220906152815-22187" [5ada2c0d-3155-4391-bcd5-614b1a8d1f4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 15:29:54.924320   32195 system_pods.go:61] "kube-proxy-6sj24" [d3db9b4e-72b8-498d-b2f8-e7f5249cac81] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 15:29:54.924324   32195 system_pods.go:61] "kube-scheduler-pause-20220906152815-22187" [a63785fe-5692-42e7-85e1-850d79c80bb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:29:54.924328   32195 system_pods.go:74] duration metric: took 6.385314ms to wait for pod list to return data ...
	I0906 15:29:54.924347   32195 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:29:54.927033   32195 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:29:54.927048   32195 node_conditions.go:123] node cpu capacity is 6
	I0906 15:29:54.927057   32195 node_conditions.go:105] duration metric: took 2.704771ms to run NodePressure ...
	I0906 15:29:54.927070   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:55.042701   32195 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:29:55.046775   32195 kubeadm.go:778] kubelet initialised
	I0906 15:29:55.046787   32195 kubeadm.go:779] duration metric: took 4.072728ms waiting for restarted kubelet to initialise ...
	I0906 15:29:55.046797   32195 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:29:55.052083   32195 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-tb4pk" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.057508   32195 pod_ready.go:92] pod "coredns-565d847f94-tb4pk" in "kube-system" namespace has status "Ready":"True"
	I0906 15:29:55.057517   32195 pod_ready.go:81] duration metric: took 5.421387ms waiting for pod "coredns-565d847f94-tb4pk" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.057523   32195 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.062455   32195 pod_ready.go:92] pod "coredns-565d847f94-xxcwh" in "kube-system" namespace has status "Ready":"True"
	I0906 15:29:55.062464   32195 pod_ready.go:81] duration metric: took 4.936622ms waiting for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.062470   32195 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:57.073968   32195 pod_ready.go:102] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:29:59.076127   32195 pod_ready.go:102] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:30:00.072041   32195 pod_ready.go:92] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:00.072054   32195 pod_ready.go:81] duration metric: took 5.009578722s waiting for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:00.072060   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:00.076123   32195 pod_ready.go:92] pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:00.076131   32195 pod_ready.go:81] duration metric: took 4.066381ms waiting for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:00.076140   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.088645   32195 pod_ready.go:92] pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:01.088658   32195 pod_ready.go:81] duration metric: took 1.012512942s waiting for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.088667   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.092757   32195 pod_ready.go:92] pod "kube-proxy-6sj24" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:01.092765   32195 pod_ready.go:81] duration metric: took 4.093235ms waiting for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.092771   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.830691   32195 pod_ready.go:92] pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:01.830703   32195 pod_ready.go:81] duration metric: took 737.926758ms waiting for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.830709   32195 pod_ready.go:38] duration metric: took 6.783902804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
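Editor's note: each pod_ready wait above polls a pod in kube-system until its PodReady condition reports True, under a 4m0s budget. A hedged client-go sketch of one such wait; the kubeconfig path, pod name, and poll cadence are illustrative.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // ready reports whether the pod's PodReady condition is True.
    func ready(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // 4m0s budget, as logged
    	defer cancel()
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-20220906152815-22187", metav1.GetOptions{})
    		if err == nil && ready(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("timed out waiting for Ready")
    			return
    		case <-time.After(2 * time.Second): // the log re-checks on a similar cadence
    		}
    	}
    }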
	I0906 15:30:01.830721   32195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:30:01.838005   32195 ops.go:34] apiserver oom_adj: -16
	I0906 15:30:01.838014   32195 kubeadm.go:631] restartCluster took 50.593170106s
	I0906 15:30:01.838022   32195 kubeadm.go:398] StartCluster complete in 50.630620083s
	I0906 15:30:01.838035   32195 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:30:01.838111   32195 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:30:01.838524   32195 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:30:01.839343   32195 kapi.go:59] client config for pause-20220906152815-22187: &rest.Config{Host:"https://127.0.0.1:57914", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:30:01.842012   32195 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220906152815-22187" rescaled to 1
	I0906 15:30:01.842040   32195 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:30:01.842045   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:30:01.842080   32195 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0906 15:30:01.842183   32195 config.go:180] Loaded profile config "pause-20220906152815-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:30:01.864738   32195 out.go:177] * Verifying Kubernetes components...
	I0906 15:30:01.864871   32195 addons.go:65] Setting default-storageclass=true in profile "pause-20220906152815-22187"
	I0906 15:30:01.864920   32195 addons.go:65] Setting storage-provisioner=true in profile "pause-20220906152815-22187"
	I0906 15:30:01.885987   32195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220906152815-22187"
	I0906 15:30:01.886012   32195 addons.go:153] Setting addon storage-provisioner=true in "pause-20220906152815-22187"
	W0906 15:30:01.886021   32195 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:30:01.886032   32195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:30:01.886084   32195 host.go:66] Checking if "pause-20220906152815-22187" exists ...
	I0906 15:30:01.886351   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:30:01.887248   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:30:01.913914   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:30:01.913930   32195 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 15:30:01.966007   32195 kapi.go:59] client config for pause-20220906152815-22187: &rest.Config{Host:"https://127.0.0.1:57914", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:30:01.990691   32195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:30:01.994519   32195 addons.go:153] Setting addon default-storageclass=true in "pause-20220906152815-22187"
	W0906 15:30:02.010804   32195 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:30:02.010858   32195 host.go:66] Checking if "pause-20220906152815-22187" exists ...
	I0906 15:30:02.010910   32195 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:30:02.010924   32195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:30:02.011008   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:30:02.012004   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:30:02.020398   32195 node_ready.go:35] waiting up to 6m0s for node "pause-20220906152815-22187" to be "Ready" ...
	I0906 15:30:02.023954   32195 node_ready.go:49] node "pause-20220906152815-22187" has status "Ready":"True"
	I0906 15:30:02.023964   32195 node_ready.go:38] duration metric: took 3.545297ms waiting for node "pause-20220906152815-22187" to be "Ready" ...
	I0906 15:30:02.023970   32195 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:30:02.082831   32195 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:30:02.082844   32195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:30:02.082908   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:30:02.082973   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:30:02.122659   32195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:02.148220   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:30:02.171051   32195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:30:02.237224   32195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:30:02.743659   32195 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 15:30:02.763901   32195 addons.go:414] enableAddons completed in 921.833683ms
	I0906 15:30:04.527847   32195 pod_ready.go:102] pod "coredns-565d847f94-xxcwh" in "kube-system" namespace has status "Ready":"False"
	I0906 15:30:05.528041   32195 pod_ready.go:92] pod "coredns-565d847f94-xxcwh" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:05.528053   32195 pod_ready.go:81] duration metric: took 3.405376442s waiting for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.528062   32195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.532569   32195 pod_ready.go:92] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:05.532578   32195 pod_ready.go:81] duration metric: took 4.510365ms waiting for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.532584   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.722538   32195 pod_ready.go:92] pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:05.722547   32195 pod_ready.go:81] duration metric: took 189.959018ms waiting for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.722554   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.121470   32195 pod_ready.go:92] pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:06.121479   32195 pod_ready.go:81] duration metric: took 398.921172ms waiting for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.121486   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.521236   32195 pod_ready.go:92] pod "kube-proxy-6sj24" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:06.521246   32195 pod_ready.go:81] duration metric: took 399.75581ms waiting for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.521252   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.923930   32195 pod_ready.go:92] pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:06.923940   32195 pod_ready.go:81] duration metric: took 402.683503ms waiting for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.923946   32195 pod_ready.go:38] duration metric: took 4.899969506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:30:06.923964   32195 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:30:06.924012   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:30:06.933539   32195 api_server.go:71] duration metric: took 5.0914834s to wait for apiserver process to appear ...
	I0906 15:30:06.933557   32195 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:30:06.933564   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:30:06.938987   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 200:
	ok
	I0906 15:30:06.940195   32195 api_server.go:140] control plane version: v1.25.0
	I0906 15:30:06.940204   32195 api_server.go:130] duration metric: took 6.642342ms to wait for apiserver health ...
	I0906 15:30:06.940208   32195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:30:07.123061   32195 system_pods.go:59] 7 kube-system pods found
	I0906 15:30:07.123076   32195 system_pods.go:61] "coredns-565d847f94-xxcwh" [50ea7d09-4033-4175-9811-a28207750f60] Running
	I0906 15:30:07.123082   32195 system_pods.go:61] "etcd-pause-20220906152815-22187" [5e63d3ad-42b6-4fb6-b92a-be92b400528c] Running
	I0906 15:30:07.123086   32195 system_pods.go:61] "kube-apiserver-pause-20220906152815-22187" [9cd2014e-f7cc-4e00-8163-0d16fb62a018] Running
	I0906 15:30:07.123089   32195 system_pods.go:61] "kube-controller-manager-pause-20220906152815-22187" [5ada2c0d-3155-4391-bcd5-614b1a8d1f4e] Running
	I0906 15:30:07.123093   32195 system_pods.go:61] "kube-proxy-6sj24" [d3db9b4e-72b8-498d-b2f8-e7f5249cac81] Running
	I0906 15:30:07.123098   32195 system_pods.go:61] "kube-scheduler-pause-20220906152815-22187" [a63785fe-5692-42e7-85e1-850d79c80bb0] Running
	I0906 15:30:07.123101   32195 system_pods.go:61] "storage-provisioner" [1076ba8f-0e79-4f3b-8128-739a0d0814b9] Running
	I0906 15:30:07.123105   32195 system_pods.go:74] duration metric: took 182.893492ms to wait for pod list to return data ...
	I0906 15:30:07.123111   32195 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:30:07.321161   32195 default_sa.go:45] found service account: "default"
	I0906 15:30:07.321172   32195 default_sa.go:55] duration metric: took 198.057494ms for default service account to be created ...
	I0906 15:30:07.321177   32195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:30:07.522693   32195 system_pods.go:86] 7 kube-system pods found
	I0906 15:30:07.522706   32195 system_pods.go:89] "coredns-565d847f94-xxcwh" [50ea7d09-4033-4175-9811-a28207750f60] Running
	I0906 15:30:07.522711   32195 system_pods.go:89] "etcd-pause-20220906152815-22187" [5e63d3ad-42b6-4fb6-b92a-be92b400528c] Running
	I0906 15:30:07.522714   32195 system_pods.go:89] "kube-apiserver-pause-20220906152815-22187" [9cd2014e-f7cc-4e00-8163-0d16fb62a018] Running
	I0906 15:30:07.522718   32195 system_pods.go:89] "kube-controller-manager-pause-20220906152815-22187" [5ada2c0d-3155-4391-bcd5-614b1a8d1f4e] Running
	I0906 15:30:07.522722   32195 system_pods.go:89] "kube-proxy-6sj24" [d3db9b4e-72b8-498d-b2f8-e7f5249cac81] Running
	I0906 15:30:07.522726   32195 system_pods.go:89] "kube-scheduler-pause-20220906152815-22187" [a63785fe-5692-42e7-85e1-850d79c80bb0] Running
	I0906 15:30:07.522731   32195 system_pods.go:89] "storage-provisioner" [1076ba8f-0e79-4f3b-8128-739a0d0814b9] Running
	I0906 15:30:07.522736   32195 system_pods.go:126] duration metric: took 201.555356ms to wait for k8s-apps to be running ...
	I0906 15:30:07.522741   32195 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:30:07.522790   32195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:30:07.532591   32195 system_svc.go:56] duration metric: took 9.845179ms WaitForService to wait for kubelet.
	I0906 15:30:07.532604   32195 kubeadm.go:573] duration metric: took 5.690549978s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:30:07.532618   32195 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:30:07.721563   32195 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:30:07.721574   32195 node_conditions.go:123] node cpu capacity is 6
	I0906 15:30:07.721584   32195 node_conditions.go:105] duration metric: took 188.96208ms to run NodePressure ...
	I0906 15:30:07.721593   32195 start.go:216] waiting for startup goroutines ...
	I0906 15:30:07.755241   32195 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:30:07.779099   32195 out.go:177] * Done! kubectl is now configured to use "pause-20220906152815-22187" cluster and "default" namespace by default

** /stderr **
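
For context on the checks in the stderr log above: once the system pods report Ready, api_server.go probes the forwarded apiserver endpoint (https://127.0.0.1:57914/healthz, answered with 200 "ok") using the client certificate, key, and CA listed in the kapi.go rest.Config. The following is a minimal Go sketch of such a probe, not minikube's actual implementation; PROFILE_DIR is a hypothetical variable standing in for the profile directory shown in the log, and the CA is assumed to sit two levels up at .minikube/ca.crt, as logged.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Hypothetical env var for the profile directory logged above,
	// e.g. .../.minikube/profiles/pause-20220906152815-22187
	dir := os.Getenv("PROFILE_DIR")

	// Client certificate pair, as in the rest.Config CertFile/KeyFile fields.
	cert, err := tls.LoadX509KeyPair(dir+"/client.crt", dir+"/client.key")
	if err != nil {
		panic(err)
	}

	// Trust the cluster CA (the rest.Config CAFile), assumed at .minikube/ca.crt.
	caPEM, err := os.ReadFile(dir + "/../../ca.crt")
	if err != nil {
		panic(err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		panic("invalid CA PEM")
	}

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      roots,
			},
		},
	}

	// Forwarded apiserver port from the log (container 8443/tcp -> 127.0.0.1:57914).
	resp, err := client.Get("https://127.0.0.1:57914/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // the run above logged 200: ok
}
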
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220906152815-22187
helpers_test.go:235: (dbg) docker inspect pause-20220906152815-22187:

-- stdout --
	[
	    {
	        "Id": "2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8",
	        "Created": "2022-09-06T22:28:21.921289409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 139729,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:28:22.212627697Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8/hosts",
	        "LogPath": "/var/lib/docker/containers/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8-json.log",
	        "Name": "/pause-20220906152815-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220906152815-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220906152815-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2536088c1da14752944a4677ad7016291804d6ad2c0b32ab67e73c66cdf8f6e2-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/docker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908ed5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2536088c1da14752944a4677ad7016291804d6ad2c0b32ab67e73c66cdf8f6e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2536088c1da14752944a4677ad7016291804d6ad2c0b32ab67e73c66cdf8f6e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2536088c1da14752944a4677ad7016291804d6ad2c0b32ab67e73c66cdf8f6e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20220906152815-22187",
	                "Source": "/var/lib/docker/volumes/pause-20220906152815-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220906152815-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220906152815-22187",
	                "name.minikube.sigs.k8s.io": "pause-20220906152815-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22f8009ce979665aedfc03290832762956fc09768abbe4eeb8ab6b04f0839f76",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57910"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57911"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57913"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57914"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/22f8009ce979",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220906152815-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2bc2a9a82758",
	                        "pause-20220906152815-22187"
	                    ],
	                    "NetworkID": "a4ace12f2e9e8745b1ce59d548a9ac43144f88a66c7a5065fbf1ce18381acfe6",
	                    "EndpointID": "f71107d096fc41ec7a9baff3fb644404751c6c9d3291376607a7573611ee6cfe",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
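
The cli_runner lines in the logs read single fields out of this inspect document with Go templates, e.g. docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" to learn that container port 8443/tcp is published on host port 57914. Below is a minimal Go sketch of the same lookup, shelling out to the docker CLI; hostPort is our name for the helper, not a minikube function.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves which host port Docker published for a container port,
// using the same inspect template the test logs show.
func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("pause-20220906152815-22187", "8443/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // prints 57914 for the container inspected above
}
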
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220906152815-22187 -n pause-20220906152815-22187
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220906152815-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220906152815-22187 logs -n 25: (3.087141514s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|-------------------------------------------|-------------------------------------------|----------|---------|---------------------|---------------------|
	|  Command   |                   Args                    |                  Profile                  |   User   | Version |     Start Time      |      End Time       |
	|------------|-------------------------------------------|-------------------------------------------|----------|---------|---------------------|---------------------|
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 5m                             |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 5m                             |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT | 06 Sep 22 15:22 PDT |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --cancel-scheduled                        |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:23 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:23 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:23 PDT | 06 Sep 22 15:23 PDT |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| delete     | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:24 PDT | 06 Sep 22 15:24 PDT |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	| start      | -p                                        | skaffold-20220906152410-22187             | jenkins  | v1.26.1 | 06 Sep 22 15:24 PDT | 06 Sep 22 15:24 PDT |
	|            | skaffold-20220906152410-22187             |                                           |          |         |                     |                     |
	|            | --memory=2600 --driver=docker             |                                           |          |         |                     |                     |
	| docker-env | --shell none -p                           | skaffold-20220906152410-22187             | skaffold | v1.26.1 | 06 Sep 22 15:24 PDT | 06 Sep 22 15:24 PDT |
	|            | skaffold-20220906152410-22187             |                                           |          |         |                     |                     |
	|            | --user=skaffold                           |                                           |          |         |                     |                     |
	| delete     | -p                                        | skaffold-20220906152410-22187             | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|            | skaffold-20220906152410-22187             |                                           |          |         |                     |                     |
	| start      | -p                                        | insufficient-storage-20220906152509-22187 | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT |                     |
	|            | insufficient-storage-20220906152509-22187 |                                           |          |         |                     |                     |
	|            | --memory=2048 --output=json --wait=true   |                                           |          |         |                     |                     |
	|            | --driver=docker                           |                                           |          |         |                     |                     |
	| delete     | -p                                        | insufficient-storage-20220906152509-22187 | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|            | insufficient-storage-20220906152509-22187 |                                           |          |         |                     |                     |
	| start      | -p                                        | offline-docker-20220906152522-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:26 PDT |
	|            | offline-docker-20220906152522-22187       |                                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1                    |                                           |          |         |                     |                     |
	|            | --memory=2048 --wait=true                 |                                           |          |         |                     |                     |
	|            | --driver=docker                           |                                           |          |         |                     |                     |
	| delete     | -p                                        | flannel-20220906152522-22187              | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|            | flannel-20220906152522-22187              |                                           |          |         |                     |                     |
	| delete     | -p                                        | custom-flannel-20220906152522-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|            | custom-flannel-20220906152522-22187       |                                           |          |         |                     |                     |
	| delete     | -p                                        | offline-docker-20220906152522-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:26 PDT | 06 Sep 22 15:26 PDT |
	|            | offline-docker-20220906152522-22187       |                                           |          |         |                     |                     |
	| start      | -p                                        | kubernetes-upgrade-20220906152610-22187   | jenkins  | v1.26.1 | 06 Sep 22 15:26 PDT |                     |
	|            | kubernetes-upgrade-20220906152610-22187   |                                           |          |         |                     |                     |
	|            | --memory=2200                             |                                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0              |                                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1 --driver=docker    |                                           |          |         |                     |                     |
	| delete     | -p                                        | missing-upgrade-20220906152523-22187      | jenkins  | v1.26.1 | 06 Sep 22 15:26 PDT | 06 Sep 22 15:26 PDT |
	|            | missing-upgrade-20220906152523-22187      |                                           |          |         |                     |                     |
	| delete     | -p                                        | stopped-upgrade-20220906152634-22187      | jenkins  | v1.26.1 | 06 Sep 22 15:27 PDT | 06 Sep 22 15:27 PDT |
	|            | stopped-upgrade-20220906152634-22187      |                                           |          |         |                     |                     |
	| delete     | -p                                        | running-upgrade-20220906152727-22187      | jenkins  | v1.26.1 | 06 Sep 22 15:28 PDT | 06 Sep 22 15:28 PDT |
	|            | running-upgrade-20220906152727-22187      |                                           |          |         |                     |                     |
	| start      | -p pause-20220906152815-22187             | pause-20220906152815-22187                | jenkins  | v1.26.1 | 06 Sep 22 15:28 PDT | 06 Sep 22 15:28 PDT |
	|            | --memory=2048                             |                                           |          |         |                     |                     |
	|            | --install-addons=false                    |                                           |          |         |                     |                     |
	|            | --wait=all --driver=docker                |                                           |          |         |                     |                     |
	| start      | -p pause-20220906152815-22187             | pause-20220906152815-22187                | jenkins  | v1.26.1 | 06 Sep 22 15:28 PDT | 06 Sep 22 15:30 PDT |
	|            | --alsologtostderr -v=1                    |                                           |          |         |                     |                     |
	|            | --driver=docker                           |                                           |          |         |                     |                     |
	|------------|-------------------------------------------|-------------------------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:28:59
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:28:59.603166   32195 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:28:59.603336   32195 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:28:59.603341   32195 out.go:309] Setting ErrFile to fd 2...
	I0906 15:28:59.603345   32195 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:28:59.603456   32195 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:28:59.603903   32195 out.go:303] Setting JSON to false
	I0906 15:28:59.619038   32195 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8910,"bootTime":1662494429,"procs":333,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:28:59.619144   32195 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:28:59.647374   32195 out.go:177] * [pause-20220906152815-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:28:59.688869   32195 notify.go:193] Checking for updates...
	I0906 15:28:59.709648   32195 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:28:59.730836   32195 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:28:59.751688   32195 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:28:59.772603   32195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:28:59.793912   32195 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:28:59.815496   32195 config.go:180] Loaded profile config "pause-20220906152815-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:28:59.816152   32195 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:28:59.886077   32195 docker.go:137] docker version: linux-20.10.17
	I0906 15:28:59.886228   32195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:29:00.018367   32195 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:56 SystemTime:2022-09-06 22:28:59.960654625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:29:00.061749   32195 out.go:177] * Using the docker driver based on existing profile
	I0906 15:29:00.082772   32195 start.go:284] selected driver: docker
	I0906 15:29:00.082793   32195 start.go:808] validating driver "docker" against &{Name:pause-20220906152815-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:29:00.082915   32195 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:29:00.083047   32195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:29:00.214503   32195 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:56 SystemTime:2022-09-06 22:29:00.158060708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:29:00.216567   32195 cni.go:95] Creating CNI manager for ""
	I0906 15:29:00.216586   32195 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:29:00.216601   32195 start_flags.go:310] config:
	{Name:pause-20220906152815-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:29:00.238278   32195 out.go:177] * Starting control plane node pause-20220906152815-22187 in cluster pause-20220906152815-22187
	I0906 15:29:00.259224   32195 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:29:00.281291   32195 out.go:177] * Pulling base image ...
	I0906 15:29:00.323997   32195 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:29:00.324000   32195 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:29:00.324090   32195 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:29:00.324109   32195 cache.go:57] Caching tarball of preloaded images
	I0906 15:29:00.324669   32195 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:29:00.324806   32195 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:29:00.325077   32195 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/config.json ...
	I0906 15:29:00.386985   32195 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:29:00.387002   32195 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:29:00.387013   32195 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:29:00.387060   32195 start.go:364] acquiring machines lock for pause-20220906152815-22187: {Name:mk4180017503fe44437ec5e270ffb6df449347ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:29:00.387152   32195 start.go:368] acquired machines lock for "pause-20220906152815-22187" in 75.414µs
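
	The machines lock acquired above serializes concurrent minikube processes that touch the same machine; the logged spec ({... Delay:500ms Timeout:10m0s ...}) matches juju/mutex's Spec type, naming a retry delay and an overall timeout. A hypothetical file-lock sketch with the same observable behavior (the path and helper names here are made up for illustration):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire polls for an exclusive lock file until the timeout elapses.
	func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
		path := "/tmp/" + name + ".lock"
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation atomic: it fails if the file already exists.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + name)
			}
			time.Sleep(delay) // matches the logged Delay:500ms
		}
	}

	func main() {
		release, err := acquire("pause-20220906152815-22187", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to mutate the machine")
	}

	Since the lock was uncontended here, acquisition took only 75µs; the delay/timeout machinery matters when two profiles start at once.
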
	I0906 15:29:00.387173   32195 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:29:00.387184   32195 fix.go:55] fixHost starting: 
	I0906 15:29:00.387433   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:29:00.453101   32195 fix.go:103] recreateIfNeeded on pause-20220906152815-22187: state=Running err=<nil>
	W0906 15:29:00.453131   32195 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:29:00.474904   32195 out.go:177] * Updating the running docker "pause-20220906152815-22187" container ...
	I0906 15:29:00.516757   32195 machine.go:88] provisioning docker machine ...
	I0906 15:29:00.516816   32195 ubuntu.go:169] provisioning hostname "pause-20220906152815-22187"
	I0906 15:29:00.516938   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:00.593870   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:00.594076   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:00.594094   32195 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220906152815-22187 && echo "pause-20220906152815-22187" | sudo tee /etc/hostname
	I0906 15:29:00.714936   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220906152815-22187
	
	I0906 15:29:00.715006   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:00.779722   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:00.779866   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:00.779880   32195 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220906152815-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220906152815-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220906152815-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:29:00.892102   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: 
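
	Every remote step in this section first resolves which host port Docker mapped to the container's SSH port 22 (57910 in this run) via the repeated `docker container inspect -f ...` calls. A sketch of that lookup using the Docker CLI directly (assumes a docker binary on PATH; minikube routes this through its cli_runner rather than calling exec like this):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		// Same Go template the log shows: index into NetworkSettings.Ports["22/tcp"].
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("pause-20220906152815-22187")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("SSH reachable on 127.0.0.1:" + port) // e.g. 57910 in the log
	}
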
	I0906 15:29:00.892138   32195 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:29:00.892169   32195 ubuntu.go:177] setting up certificates
	I0906 15:29:00.892186   32195 provision.go:83] configureAuth start
	I0906 15:29:00.892256   32195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220906152815-22187
	I0906 15:29:00.956093   32195 provision.go:138] copyHostCerts
	I0906 15:29:00.956278   32195 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:29:00.956289   32195 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:29:00.956389   32195 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:29:00.956593   32195 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:29:00.956603   32195 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:29:00.956659   32195 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:29:00.956797   32195 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:29:00.956802   32195 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:29:00.956860   32195 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:29:00.957005   32195 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.pause-20220906152815-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220906152815-22187]
	I0906 15:29:01.118415   32195 provision.go:172] copyRemoteCerts
	I0906 15:29:01.118478   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:29:01.118520   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.188789   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:01.271983   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:29:01.288103   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:29:01.305007   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0906 15:29:01.321565   32195 provision.go:86] duration metric: configureAuth took 429.36012ms
	I0906 15:29:01.321580   32195 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:29:01.321709   32195 config.go:180] Loaded profile config "pause-20220906152815-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:29:01.321780   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.387789   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:01.387938   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:01.387950   32195 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:29:01.501299   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:29:01.501319   32195 ubuntu.go:71] root file system type: overlay
	I0906 15:29:01.501485   32195 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:29:01.501582   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.567705   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:01.567859   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:01.567921   32195 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:29:01.690705   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:29:01.690788   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.756058   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:01.756216   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:01.756229   32195 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:29:01.872574   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:29:01.872588   32195 machine.go:91] provisioned docker machine in 1.355807968s
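
	The unit update that just completed is deliberately idempotent: the provisioner renders docker.service.new, diffs it against the live unit, and only on a difference moves it into place and daemon-reloads/restarts docker (the `sudo diff -u ... || { ... }` one-liner above). A local Go sketch of the same replace-only-if-changed pattern (paths illustrative; the real sequence runs over SSH):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func updateUnit(current, rendered string) error {
		old, _ := os.ReadFile(current) // a missing unit compares as empty
		newer, err := os.ReadFile(rendered)
		if err != nil {
			return err
		}
		if bytes.Equal(old, newer) {
			fmt.Println("unit unchanged; skipping restart")
			return nil
		}
		if err := os.Rename(rendered, current); err != nil {
			return err
		}
		// Pick up the new unit and restart the daemon, mirroring
		// `systemctl daemon-reload && systemctl restart docker`.
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
		}
		return nil
	}

	func main() {
		if err := updateUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new"); err != nil {
			fmt.Println(err)
		}
	}

	The skip-if-unchanged branch is why a no-op provision finishes in about a second, while this run (which did restart docker) spends over 7s in `systemctl restart docker` below.
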
	I0906 15:29:01.872598   32195 start.go:300] post-start starting for "pause-20220906152815-22187" (driver="docker")
	I0906 15:29:01.872603   32195 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:29:01.872682   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:29:01.872729   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.938042   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.021994   32195 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:29:02.025757   32195 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:29:02.025772   32195 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:29:02.025778   32195 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:29:02.025784   32195 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:29:02.025793   32195 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:29:02.025908   32195 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:29:02.026042   32195 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:29:02.026194   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:29:02.033762   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:29:02.053064   32195 start.go:303] post-start completed in 180.456324ms
	I0906 15:29:02.053151   32195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:29:02.053223   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:02.118998   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.199982   32195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:29:02.204552   32195 fix.go:57] fixHost completed within 1.817365407s
	I0906 15:29:02.204565   32195 start.go:83] releasing machines lock for "pause-20220906152815-22187", held for 1.817401211s
	I0906 15:29:02.204638   32195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220906152815-22187
	I0906 15:29:02.269753   32195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:29:02.269771   32195 ssh_runner.go:195] Run: systemctl --version
	I0906 15:29:02.269830   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:02.269844   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:02.338654   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.338703   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.465608   32195 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:29:02.475666   32195 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:29:02.475718   32195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:29:02.487657   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:29:02.500842   32195 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:29:02.593048   32195 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:29:02.671572   32195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:29:02.758296   32195 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:29:09.816595   32195 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.058266504s)
	I0906 15:29:09.816654   32195 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:29:09.937473   32195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:29:10.050359   32195 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:29:10.075080   32195 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:29:10.075165   32195 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:29:10.081157   32195 start.go:471] Will wait 60s for crictl version
	I0906 15:29:10.081228   32195 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:29:10.121153   32195 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:29:10.121228   32195 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:29:10.197263   32195 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:29:08.814065   31107 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:29:08.814802   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:08.815014   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:10.349185   32195 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:29:10.349272   32195 cli_runner.go:164] Run: docker exec -t pause-20220906152815-22187 dig +short host.docker.internal
	I0906 15:29:10.521205   32195 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:29:10.521330   32195 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:29:10.525445   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:10.594219   32195 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:29:10.594284   32195 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:29:10.630834   32195 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:29:10.630851   32195 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:29:10.630919   32195 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:29:10.705526   32195 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:29:10.705552   32195 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:29:10.705630   32195 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:29:10.810881   32195 cni.go:95] Creating CNI manager for ""
	I0906 15:29:10.810895   32195 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:29:10.810916   32195 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:29:10.810943   32195 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220906152815-22187 NodeName:pause-20220906152815-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:29:10.811060   32195 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-20220906152815-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 15:29:10.811159   32195 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220906152815-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:29:10.811225   32195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:29:10.818935   32195 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:29:10.818998   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:29:10.825948   32195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0906 15:29:10.838545   32195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:29:10.851235   32195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0906 15:29:10.863636   32195 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:29:10.867408   32195 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187 for IP: 192.168.76.2
	I0906 15:29:10.867527   32195 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:29:10.867587   32195 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:29:10.867673   32195 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key
	I0906 15:29:10.867734   32195 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/apiserver.key.31bdca25
	I0906 15:29:10.867787   32195 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/proxy-client.key
	I0906 15:29:10.868011   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:29:10.868048   32195 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:29:10.868057   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:29:10.868104   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:29:10.868136   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:29:10.868165   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:29:10.868240   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:29:10.868791   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:29:10.905280   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:29:10.933507   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:29:10.954052   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 15:29:10.992199   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:29:11.010534   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:29:11.027152   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:29:11.044585   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:29:11.065209   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:29:11.082218   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:29:11.099206   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:29:11.115721   32195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:29:11.128022   32195 ssh_runner.go:195] Run: openssl version
	I0906 15:29:11.133153   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:29:11.141435   32195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:29:11.145438   32195 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:29:11.145479   32195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:29:11.150306   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:29:11.157884   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:29:11.165301   32195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:29:11.169161   32195 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:29:11.169196   32195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:29:11.174366   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:29:11.182851   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:29:11.190783   32195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:29:11.194947   32195 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:29:11.194982   32195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:29:11.200109   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
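
	The ls/openssl/ln sequence above is the classic OpenSSL trust-store dance: install the PEM under /usr/share/ca-certificates, compute its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A sketch of one iteration, assuming an openssl binary on PATH and root privileges:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func installCA(pem string) error {
		// `openssl x509 -hash -noout -in <pem>` prints the subject hash.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
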
	I0906 15:29:11.207381   32195 kubeadm.go:396] StartCluster: {Name:pause-20220906152815-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:29:11.207477   32195 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:29:11.237437   32195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:29:11.244799   32195 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:29:11.244813   32195 kubeadm.go:627] restartCluster start
	I0906 15:29:11.244854   32195 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:29:11.251631   32195 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:11.251692   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:11.316600   32195 kubeconfig.go:92] found "pause-20220906152815-22187" server: "https://127.0.0.1:57914"
	I0906 15:29:11.317012   32195 kapi.go:59] client config for pause-20220906152815-22187: &rest.Config{Host:"https://127.0.0.1:57914", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:29:11.317569   32195 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:29:11.324940   32195 api_server.go:165] Checking apiserver status ...
	I0906 15:29:11.324985   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:29:11.334422   32195 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup
	W0906 15:29:11.344770   32195 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:11.344785   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:13.812888   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:13.813101   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:16.347203   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:16.347284   32195 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I0906 15:29:16.612497   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:21.615043   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:21.615078   32195 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I0906 15:29:21.997851   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:23.806911   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:23.807141   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:27.000257   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:27.200414   32195 api_server.go:165] Checking apiserver status ...
	I0906 15:29:27.200501   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:29:27.210198   32195 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup
	W0906 15:29:27.217873   32195 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:27.217884   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:31.050148   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:31.050200   32195 retry.go:31] will retry after 242.214273ms: state is "Stopped"
	I0906 15:29:31.294534   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:31.297376   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:31.297408   32195 retry.go:31] will retry after 300.724609ms: state is "Stopped"
	I0906 15:29:31.598448   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:31.600066   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:31.600082   32195 retry.go:31] will retry after 427.113882ms: state is "Stopped"
	I0906 15:29:32.027578   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:32.029159   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:32.029180   32195 retry.go:31] will retry after 382.2356ms: state is "Stopped"
	I0906 15:29:32.411742   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:32.414015   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:32.414042   32195 retry.go:31] will retry after 505.529557ms: state is "Stopped"
	I0906 15:29:32.919955   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:32.921927   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:32.921954   32195 retry.go:31] will retry after 609.195524ms: state is "Stopped"
	I0906 15:29:33.532401   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:33.534785   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:33.534805   32195 retry.go:31] will retry after 858.741692ms: state is "Stopped"
	I0906 15:29:34.395688   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:34.398122   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:34.398166   32195 retry.go:31] will retry after 1.201160326s: state is "Stopped"
	I0906 15:29:35.599387   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:35.601019   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:35.601041   32195 retry.go:31] will retry after 1.723796097s: state is "Stopped"
	I0906 15:29:37.327004   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:37.328789   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:37.328814   32195 retry.go:31] will retry after 1.596532639s: state is "Stopped"
	I0906 15:29:38.925505   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:38.927803   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:38.927831   32195 retry.go:31] will retry after 2.189373114s: state is "Stopped"
	I0906 15:29:41.119401   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:41.121885   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
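
	The loop above polls /healthz with a per-request timeout and jittered, growing sleeps (roughly 0.24s up to 2.2s before falling back to re-checking the apiserver process). A hedged sketch of such a poller; the port, the 5s request timeout, and the TLS verification skip are assumptions for illustration, not minikube's actual retry.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"math/rand"
		"net/http"
		"time"
	)

	func waitHealthz(url string, attempts int) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the observed Client.Timeout errors
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
			},
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				err = fmt.Errorf("healthz returned %d", resp.StatusCode)
			}
			// Grow the base delay and add jitter, like the varying
			// "will retry after ..." intervals in the log.
			delay := time.Duration(200+rand.Intn(200)) * time.Millisecond << uint(i/3)
			fmt.Printf("attempt %d failed (%v); retrying in %v\n", i+1, err, delay)
			time.Sleep(delay)
		}
		return fmt.Errorf("apiserver never became healthy")
	}

	func main() {
		if err := waitHealthz("https://127.0.0.1:57914/healthz", 12); err != nil {
			fmt.Println(err)
		}
	}
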
	I0906 15:29:41.121915   32195 api_server.go:165] Checking apiserver status ...
	I0906 15:29:41.121989   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:29:41.131832   32195 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:41.131845   32195 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:29:41.131853   32195 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:29:41.131907   32195 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:29:41.164251   32195 docker.go:443] Stopping containers: [e151952fd65e dc28e720c942 f11659194c9b 3d9be5d4c242 b733656d86e9 e3840ce5a18e 247a34f75411 4e466b02eec8 a8b84e6776d8 db12d18fbcf9 3865f67b411a 9cf4eeff8d93 036148d07169 4ea0eac22d19 50779b85909c c383af658559 b5ba814c1776 11733036d67d 1a262e52b8a6 525bc7e632ee 6d1b1ef7972d d2c9846b1e11 714381c26668 ae698ecfa8e3 e3db4859e6d0 e1b2edaa7ac1 48a9d0751dd6 25133ecf29f9]
	I0906 15:29:41.164335   32195 ssh_runner.go:195] Run: docker stop e151952fd65e dc28e720c942 f11659194c9b 3d9be5d4c242 b733656d86e9 e3840ce5a18e 247a34f75411 4e466b02eec8 a8b84e6776d8 db12d18fbcf9 3865f67b411a 9cf4eeff8d93 036148d07169 4ea0eac22d19 50779b85909c c383af658559 b5ba814c1776 11733036d67d 1a262e52b8a6 525bc7e632ee 6d1b1ef7972d d2c9846b1e11 714381c26668 ae698ecfa8e3 e3db4859e6d0 e1b2edaa7ac1 48a9d0751dd6 25133ecf29f9
	I0906 15:29:43.793507   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:43.793709   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:46.311600   32195 ssh_runner.go:235] Completed: docker stop e151952fd65e dc28e720c942 f11659194c9b 3d9be5d4c242 b733656d86e9 e3840ce5a18e 247a34f75411 4e466b02eec8 a8b84e6776d8 db12d18fbcf9 3865f67b411a 9cf4eeff8d93 036148d07169 4ea0eac22d19 50779b85909c c383af658559 b5ba814c1776 11733036d67d 1a262e52b8a6 525bc7e632ee 6d1b1ef7972d d2c9846b1e11 714381c26668 ae698ecfa8e3 e3db4859e6d0 e1b2edaa7ac1 48a9d0751dd6 25133ecf29f9: (5.147237759s)
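
	Stopping the kube-system containers is a two-step docker operation: list every container, running or exited, whose name matches k8s_.*_(kube-system)_, then pass the whole ID list to a single `docker stop` (which took ~5.1s here). A direct-CLI sketch of the same steps (assumes docker on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func stopKubeSystem() error {
		// List IDs of all kubelet-managed kube-system containers.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return nil // nothing to stop
		}
		fmt.Println("Stopping containers:", ids)
		return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
	}

	func main() {
		if err := stopKubeSystem(); err != nil {
			fmt.Println(err)
		}
	}
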
	I0906 15:29:46.311676   32195 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:29:46.346530   32195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:29:46.354429   32195 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep  6 22:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 22:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2043 Sep  6 22:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:28 /etc/kubernetes/scheduler.conf
	
	I0906 15:29:46.354492   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:29:46.362206   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:29:46.370227   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:29:46.379342   32195 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:46.379408   32195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:29:46.388502   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:29:46.396359   32195 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:46.396413   32195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
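
	The grep-and-remove sequence above decides which kubeconfigs survive the restart: any /etc/kubernetes/*.conf that no longer mentions the expected control-plane endpoint is deleted so the next init phase regenerates it. A local sketch of that check (file list and endpoint exactly as logged):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file: nothing to clean up
			}
			if !bytes.Contains(data, []byte(endpoint)) {
				fmt.Println("removing stale", f)
				os.Remove(f)
			}
		}
	}
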
	I0906 15:29:46.403592   32195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:29:46.411337   32195 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:29:46.411354   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:46.474920   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.599906   32195 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.124964786s)
	I0906 15:29:47.599921   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.748957   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.797960   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
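The five Run: lines above drive individual kubeadm init phases rather than a full kubeadm init, which is how minikube rebuilds an existing control plane in place. A sketch of the same sequence run by hand (binary path and config file as in the log):

    BIN=/var/lib/minikube/binaries/v1.25.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local --config "$CFG"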
	I0906 15:29:47.892070   32195 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:29:47.892139   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:29:47.902983   32195 api_server.go:71] duration metric: took 10.915563ms to wait for apiserver process to appear ...
	I0906 15:29:47.903008   32195 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:29:47.903024   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:52.903579   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:53.403672   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:53.473315   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:29:53.473337   32195 api_server.go:102] status: https://127.0.0.1:57914/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:29:53.904223   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:53.910683   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:29:53.910696   32195 api_server.go:102] status: https://127.0.0.1:57914/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:29:54.403658   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:54.409021   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:29:54.409043   32195 api_server.go:102] status: https://127.0.0.1:57914/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:29:54.904615   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:54.911776   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 200:
	ok
	I0906 15:29:54.917907   32195 api_server.go:140] control plane version: v1.25.0
	I0906 15:29:54.917917   32195 api_server.go:130] duration metric: took 7.014902451s to wait for apiserver health ...
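The 403 for system:anonymous above is expected while the rbac/bootstrap-roles post-start hook is still failing; minikube treats both the 403 and the 500 as not-yet-healthy and retries until /healthz returns 200. The same probe can be reproduced with curl using the profile's client certificate; the ?verbose query returns the [+]/[-] breakdown seen above (the $HOME/.minikube layout below is the default; this CI run uses a custom MINIKUBE_HOME):

    PROFILE=pause-20220906152815-22187
    MK=$HOME/.minikube
    curl -s \
      --cacert "$MK/ca.crt" \
      --cert   "$MK/profiles/$PROFILE/client.crt" \
      --key    "$MK/profiles/$PROFILE/client.key" \
      "https://127.0.0.1:57914/healthz?verbose"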
	I0906 15:29:54.917922   32195 cni.go:95] Creating CNI manager for ""
	I0906 15:29:54.917929   32195 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:29:54.917939   32195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:29:54.924269   32195 system_pods.go:59] 7 kube-system pods found
	I0906 15:29:54.924284   32195 system_pods.go:61] "coredns-565d847f94-tb4pk" [d1b33e2e-6f0b-4dc5-b778-0ed14d441d68] Running
	I0906 15:29:54.924288   32195 system_pods.go:61] "coredns-565d847f94-xxcwh" [50ea7d09-4033-4175-9811-a28207750f60] Running
	I0906 15:29:54.924296   32195 system_pods.go:61] "etcd-pause-20220906152815-22187" [5e63d3ad-42b6-4fb6-b92a-be92b400528c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:29:54.924304   32195 system_pods.go:61] "kube-apiserver-pause-20220906152815-22187" [9cd2014e-f7cc-4e00-8163-0d16fb62a018] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:29:54.924312   32195 system_pods.go:61] "kube-controller-manager-pause-20220906152815-22187" [5ada2c0d-3155-4391-bcd5-614b1a8d1f4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 15:29:54.924320   32195 system_pods.go:61] "kube-proxy-6sj24" [d3db9b4e-72b8-498d-b2f8-e7f5249cac81] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 15:29:54.924324   32195 system_pods.go:61] "kube-scheduler-pause-20220906152815-22187" [a63785fe-5692-42e7-85e1-850d79c80bb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:29:54.924328   32195 system_pods.go:74] duration metric: took 6.385314ms to wait for pod list to return data ...
	I0906 15:29:54.924347   32195 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:29:54.927033   32195 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:29:54.927048   32195 node_conditions.go:123] node cpu capacity is 6
	I0906 15:29:54.927057   32195 node_conditions.go:105] duration metric: took 2.704771ms to run NodePressure ...
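The NodePressure check reads the node's capacity and conditions through the API; the two figures logged above can be pulled by hand with a jsonpath query (context and node name from this profile):

    kubectl --context pause-20220906152815-22187 get node pause-20220906152815-22187 \
      -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'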
	I0906 15:29:54.927070   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:55.042701   32195 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:29:55.046775   32195 kubeadm.go:778] kubelet initialised
	I0906 15:29:55.046787   32195 kubeadm.go:779] duration metric: took 4.072728ms waiting for restarted kubelet to initialise ...
	I0906 15:29:55.046797   32195 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:29:55.052083   32195 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-tb4pk" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.057508   32195 pod_ready.go:92] pod "coredns-565d847f94-tb4pk" in "kube-system" namespace has status "Ready":"True"
	I0906 15:29:55.057517   32195 pod_ready.go:81] duration metric: took 5.421387ms waiting for pod "coredns-565d847f94-tb4pk" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.057523   32195 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.062455   32195 pod_ready.go:92] pod "coredns-565d847f94-xxcwh" in "kube-system" namespace has status "Ready":"True"
	I0906 15:29:55.062464   32195 pod_ready.go:81] duration metric: took 4.936622ms waiting for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.062470   32195 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:57.073968   32195 pod_ready.go:102] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:29:59.076127   32195 pod_ready.go:102] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:30:00.072041   32195 pod_ready.go:92] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:00.072054   32195 pod_ready.go:81] duration metric: took 5.009578722s waiting for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:00.072060   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:00.076123   32195 pod_ready.go:92] pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:00.076131   32195 pod_ready.go:81] duration metric: took 4.066381ms waiting for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:00.076140   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.088645   32195 pod_ready.go:92] pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:01.088658   32195 pod_ready.go:81] duration metric: took 1.012512942s waiting for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.088667   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.092757   32195 pod_ready.go:92] pod "kube-proxy-6sj24" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:01.092765   32195 pod_ready.go:81] duration metric: took 4.093235ms waiting for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.092771   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.830691   32195 pod_ready.go:92] pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:01.830703   32195 pod_ready.go:81] duration metric: took 737.926758ms waiting for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.830709   32195 pod_ready.go:38] duration metric: took 6.783902804s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:30:01.830721   32195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:30:01.838005   32195 ops.go:34] apiserver oom_adj: -16
	I0906 15:30:01.838014   32195 kubeadm.go:631] restartCluster took 50.593170106s
	I0906 15:30:01.838022   32195 kubeadm.go:398] StartCluster complete in 50.630620083s
	I0906 15:30:01.838035   32195 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:30:01.838111   32195 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:30:01.838524   32195 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:30:01.839343   32195 kapi.go:59] client config for pause-20220906152815-22187: &rest.Config{Host:"https://127.0.0.1:57914", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:30:01.842012   32195 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220906152815-22187" rescaled to 1
	I0906 15:30:01.842040   32195 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:30:01.842045   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:30:01.842080   32195 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0906 15:30:01.842183   32195 config.go:180] Loaded profile config "pause-20220906152815-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:30:01.864738   32195 out.go:177] * Verifying Kubernetes components...
	I0906 15:30:01.864871   32195 addons.go:65] Setting default-storageclass=true in profile "pause-20220906152815-22187"
	I0906 15:30:01.864920   32195 addons.go:65] Setting storage-provisioner=true in profile "pause-20220906152815-22187"
	I0906 15:30:01.885987   32195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220906152815-22187"
	I0906 15:30:01.886012   32195 addons.go:153] Setting addon storage-provisioner=true in "pause-20220906152815-22187"
	W0906 15:30:01.886021   32195 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:30:01.886032   32195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:30:01.886084   32195 host.go:66] Checking if "pause-20220906152815-22187" exists ...
	I0906 15:30:01.886351   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:30:01.887248   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:30:01.913914   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:30:01.913930   32195 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 15:30:01.966007   32195 kapi.go:59] client config for pause-20220906152815-22187: &rest.Config{Host:"https://127.0.0.1:57914", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:30:01.990691   32195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:30:01.994519   32195 addons.go:153] Setting addon default-storageclass=true in "pause-20220906152815-22187"
	W0906 15:30:02.010804   32195 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:30:02.010858   32195 host.go:66] Checking if "pause-20220906152815-22187" exists ...
	I0906 15:30:02.010910   32195 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:30:02.010924   32195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:30:02.011008   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:30:02.012004   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:30:02.020398   32195 node_ready.go:35] waiting up to 6m0s for node "pause-20220906152815-22187" to be "Ready" ...
	I0906 15:30:02.023954   32195 node_ready.go:49] node "pause-20220906152815-22187" has status "Ready":"True"
	I0906 15:30:02.023964   32195 node_ready.go:38] duration metric: took 3.545297ms waiting for node "pause-20220906152815-22187" to be "Ready" ...
	I0906 15:30:02.023970   32195 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:30:02.082831   32195 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:30:02.082844   32195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:30:02.082908   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:30:02.082973   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:30:02.122659   32195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:02.148220   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
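Both the SSH and apiserver connections above go through host ports that Docker publishes for the guest container; the Go templates in the cli_runner lines resolve them, e.g.:

    docker container inspect pause-20220906152815-22187 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'    # -> 57910 (SSH)
    docker container inspect pause-20220906152815-22187 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'  # -> 57914 (apiserver)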
	I0906 15:30:02.171051   32195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:30:02.237224   32195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:30:02.743659   32195 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 15:30:02.763901   32195 addons.go:414] enableAddons completed in 921.833683ms
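Addon enablement reduces to scp'ing the manifests into /etc/kubernetes/addons and applying them with the bundled kubectl, as the two apply Run: lines above show; condensed into one invocation:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.25.0/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml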
	I0906 15:30:04.527847   32195 pod_ready.go:102] pod "coredns-565d847f94-xxcwh" in "kube-system" namespace has status "Ready":"False"
	I0906 15:30:05.528041   32195 pod_ready.go:92] pod "coredns-565d847f94-xxcwh" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:05.528053   32195 pod_ready.go:81] duration metric: took 3.405376442s waiting for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.528062   32195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.532569   32195 pod_ready.go:92] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:05.532578   32195 pod_ready.go:81] duration metric: took 4.510365ms waiting for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.532584   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.722538   32195 pod_ready.go:92] pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:05.722547   32195 pod_ready.go:81] duration metric: took 189.959018ms waiting for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.722554   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.121470   32195 pod_ready.go:92] pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:06.121479   32195 pod_ready.go:81] duration metric: took 398.921172ms waiting for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.121486   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.521236   32195 pod_ready.go:92] pod "kube-proxy-6sj24" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:06.521246   32195 pod_ready.go:81] duration metric: took 399.75581ms waiting for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.521252   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.923930   32195 pod_ready.go:92] pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:06.923940   32195 pod_ready.go:81] duration metric: took 402.683503ms waiting for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.923946   32195 pod_ready.go:38] duration metric: took 4.899969506s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:30:06.923964   32195 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:30:06.924012   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:30:06.933539   32195 api_server.go:71] duration metric: took 5.0914834s to wait for apiserver process to appear ...
	I0906 15:30:06.933557   32195 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:30:06.933564   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:30:06.938987   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 200:
	ok
	I0906 15:30:06.940195   32195 api_server.go:140] control plane version: v1.25.0
	I0906 15:30:06.940204   32195 api_server.go:130] duration metric: took 6.642342ms to wait for apiserver health ...
	I0906 15:30:06.940208   32195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:30:07.123061   32195 system_pods.go:59] 7 kube-system pods found
	I0906 15:30:07.123076   32195 system_pods.go:61] "coredns-565d847f94-xxcwh" [50ea7d09-4033-4175-9811-a28207750f60] Running
	I0906 15:30:07.123082   32195 system_pods.go:61] "etcd-pause-20220906152815-22187" [5e63d3ad-42b6-4fb6-b92a-be92b400528c] Running
	I0906 15:30:07.123086   32195 system_pods.go:61] "kube-apiserver-pause-20220906152815-22187" [9cd2014e-f7cc-4e00-8163-0d16fb62a018] Running
	I0906 15:30:07.123089   32195 system_pods.go:61] "kube-controller-manager-pause-20220906152815-22187" [5ada2c0d-3155-4391-bcd5-614b1a8d1f4e] Running
	I0906 15:30:07.123093   32195 system_pods.go:61] "kube-proxy-6sj24" [d3db9b4e-72b8-498d-b2f8-e7f5249cac81] Running
	I0906 15:30:07.123098   32195 system_pods.go:61] "kube-scheduler-pause-20220906152815-22187" [a63785fe-5692-42e7-85e1-850d79c80bb0] Running
	I0906 15:30:07.123101   32195 system_pods.go:61] "storage-provisioner" [1076ba8f-0e79-4f3b-8128-739a0d0814b9] Running
	I0906 15:30:07.123105   32195 system_pods.go:74] duration metric: took 182.893492ms to wait for pod list to return data ...
	I0906 15:30:07.123111   32195 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:30:07.321161   32195 default_sa.go:45] found service account: "default"
	I0906 15:30:07.321172   32195 default_sa.go:55] duration metric: took 198.057494ms for default service account to be created ...
	I0906 15:30:07.321177   32195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:30:07.522693   32195 system_pods.go:86] 7 kube-system pods found
	I0906 15:30:07.522706   32195 system_pods.go:89] "coredns-565d847f94-xxcwh" [50ea7d09-4033-4175-9811-a28207750f60] Running
	I0906 15:30:07.522711   32195 system_pods.go:89] "etcd-pause-20220906152815-22187" [5e63d3ad-42b6-4fb6-b92a-be92b400528c] Running
	I0906 15:30:07.522714   32195 system_pods.go:89] "kube-apiserver-pause-20220906152815-22187" [9cd2014e-f7cc-4e00-8163-0d16fb62a018] Running
	I0906 15:30:07.522718   32195 system_pods.go:89] "kube-controller-manager-pause-20220906152815-22187" [5ada2c0d-3155-4391-bcd5-614b1a8d1f4e] Running
	I0906 15:30:07.522722   32195 system_pods.go:89] "kube-proxy-6sj24" [d3db9b4e-72b8-498d-b2f8-e7f5249cac81] Running
	I0906 15:30:07.522726   32195 system_pods.go:89] "kube-scheduler-pause-20220906152815-22187" [a63785fe-5692-42e7-85e1-850d79c80bb0] Running
	I0906 15:30:07.522731   32195 system_pods.go:89] "storage-provisioner" [1076ba8f-0e79-4f3b-8128-739a0d0814b9] Running
	I0906 15:30:07.522736   32195 system_pods.go:126] duration metric: took 201.555356ms to wait for k8s-apps to be running ...
	I0906 15:30:07.522741   32195 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:30:07.522790   32195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:30:07.532591   32195 system_svc.go:56] duration metric: took 9.845179ms WaitForService to wait for kubelet.
	I0906 15:30:07.532604   32195 kubeadm.go:573] duration metric: took 5.690549978s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:30:07.532618   32195 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:30:07.721563   32195 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:30:07.721574   32195 node_conditions.go:123] node cpu capacity is 6
	I0906 15:30:07.721584   32195 node_conditions.go:105] duration metric: took 188.96208ms to run NodePressure ...
	I0906 15:30:07.721593   32195 start.go:216] waiting for startup goroutines ...
	I0906 15:30:07.755241   32195 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:30:07.779099   32195 out.go:177] * Done! kubectl is now configured to use "pause-20220906152815-22187" cluster and "default" namespace by default
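One way to re-check by hand what the pod_ready loop above just verified, using the standard kubectl wait subcommand (the 4m timeout mirrors the log's per-pod budget):

    kubectl --context pause-20220906152815-22187 -n kube-system \
      wait pod --all --for=condition=Ready --timeout=4m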
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:28:22 UTC, end at Tue 2022-09-06 22:30:09 UTC. --
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.526202416Z" level=info msg="Removing stale sandbox d605f169db0b0560557e19d6b99952df24e6e8237f7a2319b27e4612f0daac56 (c383af658559637a16b100fc863281140e8f58f7f428205c5a83e052d617369f)"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.527484748Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint db3f1e0ae700d98b07e6c6d9789316b893375afd8e2b057bc70037fba855c644 67074503aebcde4977892e78533c054ea32e26b40c32339645a722ce23788480], retrying...."
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.611028786Z" level=info msg="Removing stale sandbox db5560f36934a6b9ea8ebf6e7d4c422ddabe51368227f146ac4773fe65a86f23 (4ea0eac22d199b65bb325f10118ecfe0f5a9c3c5a56f62654129e3671a9c1312)"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.645174784Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint db3f1e0ae700d98b07e6c6d9789316b893375afd8e2b057bc70037fba855c644 87893f8e0000d02925d70b39992410d1ab2ded1d8121a9754f56aaeaeb33b72d], retrying...."
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.732555979Z" level=info msg="Removing stale sandbox f5b0c7b703a41fbcd48d230de7b3ec93c84c812dd5f485a070f5344fb7ea4a27 (1a262e52b8a665d2790c3baf05e276e7deb95a5795761984377a828ef721c2e1)"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.733829489Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint db3f1e0ae700d98b07e6c6d9789316b893375afd8e2b057bc70037fba855c644 4d51000e42546b77d2194111203528e937ea8aca206c241f7bec1e745aeae995], retrying...."
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.756304371Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.791309010Z" level=info msg="Loading containers: done."
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.799785218Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.799852952Z" level=info msg="Daemon has completed initialization"
	Sep 06 22:29:09 pause-20220906152815-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.821896749Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.823485233Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 22:29:31 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:31.085187184Z" level=info msg="ignoring event" container=247a34f754110bb2df9dc606061e6222051a04d25c3e8a8478c13e4d1d5005b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.297725215Z" level=info msg="ignoring event" container=3d9be5d4c24228d130ae6ee681a725ba0558416924362a619de554f964be4051 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.297751425Z" level=info msg="ignoring event" container=db12d18fbcf9ed49dff15cf7048ba3d53a00f653c4b0398e2ef265fbfc19d063 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.306292054Z" level=info msg="ignoring event" container=a8b84e6776d87caa43bfb62847b43e04b8e76e62b231ffea4b3f9e86c76df56d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.379913717Z" level=info msg="ignoring event" container=b733656d86e9e02119bf93a0923ebd696c84ef80323a941f22e86e28aa36c585 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.383999115Z" level=info msg="ignoring event" container=9cf4eeff8d9390aad5462175031c996a1928300220131936ca7741d5d7d4376e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.388942245Z" level=info msg="ignoring event" container=3865f67b411a8d5685f9868dba40b6a33b54d700eb5f985a85ac421373615328 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.392438078Z" level=info msg="ignoring event" container=4e466b02eec89ef02aaf2e9cf42bd1289b7b7edc8f3b3d673f12c934e96c9641 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.401470365Z" level=info msg="ignoring event" container=e151952fd65e57ceb7cd70b867433f1a712320eca4d170d65c1f8d4295e10e13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.401495607Z" level=info msg="ignoring event" container=f11659194c9b17223375312f27119dfa16c17d863e19b20c007a6c27a4d66e5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.404829353Z" level=info msg="ignoring event" container=dc28e720c942fc8c611cc2b377de251232abc9ded9d164db61a8a6ea1d3b3952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:46 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:46.296893503Z" level=info msg="ignoring event" container=e3840ce5a18ea72200453d44c62dddcec33fbda9af162f9d4a08ae58acff60ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	213817229cb26       6e38f40d628db       6 seconds ago       Running             storage-provisioner       0                   7c4fd6c34b2e2
	41d900adb9a39       5185b96f0becf       14 seconds ago      Running             coredns                   2                   3b13571bea99c
	b642d541a180c       58a9a0c6d96f2       14 seconds ago      Running             kube-proxy                2                   4333e8ba35ce2
	674d07e8e349b       a8a176a5d5d69       21 seconds ago      Running             etcd                      3                   934a24d031fdc
	3d38353ac93bb       bef2cf3115095       21 seconds ago      Running             kube-scheduler            3                   47f2f324bd0dc
	fcb0e8f19aa97       1a54c86c03a67       21 seconds ago      Running             kube-controller-manager   3                   1d862e8b756ab
	bbaebf830e5e2       4d2edfd10d3e3       27 seconds ago      Running             kube-apiserver            3                   a7d1629428e0a
	e151952fd65e5       bef2cf3115095       38 seconds ago      Exited              kube-scheduler            2                   9cf4eeff8d939
	dc28e720c942f       1a54c86c03a67       41 seconds ago      Exited              kube-controller-manager   2                   3865f67b411a8
	f11659194c9b1       a8a176a5d5d69       42 seconds ago      Exited              etcd                      2                   a8b84e6776d87
	3d9be5d4c2422       58a9a0c6d96f2       49 seconds ago      Exited              kube-proxy                1                   b733656d86e9e
	e3840ce5a18ea       5185b96f0becf       59 seconds ago      Exited              coredns                   1                   4e466b02eec89
	247a34f754110       4d2edfd10d3e3       59 seconds ago      Exited              kube-apiserver            2                   db12d18fbcf9e
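The table shows the pre-restart control-plane containers (Exited, earlier attempts) alongside the restarted set (Running). An equivalent view can be pulled from the node with the Docker CLI (the format string below is one reasonable choice, not necessarily the exact command the log dump uses):

    minikube -p pause-20220906152815-22187 ssh -- sudo docker ps -a \
      --format 'table {{.ID}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}\t{{.Names}}'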
	
	* 
	* ==> coredns [41d900adb9a3] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> coredns [e3840ce5a18e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 172.17.0.2:33130->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20220906152815-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20220906152815-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=pause-20220906152815-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_28_41_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:28:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20220906152815-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:30:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:29:53 +0000   Tue, 06 Sep 2022 22:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:29:53 +0000   Tue, 06 Sep 2022 22:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:29:53 +0000   Tue, 06 Sep 2022 22:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 22:29:53 +0000   Tue, 06 Sep 2022 22:28:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-20220906152815-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                91e1b8ff-a171-4174-a0b0-f45dc94c7cd7
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-xxcwh                              100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     75s
	  kube-system                 etcd-pause-20220906152815-22187                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         88s
	  kube-system                 kube-apiserver-pause-20220906152815-22187             250m (4%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-pause-20220906152815-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-6sj24                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-20220906152815-22187             100m (1%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (12%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s                kubelet          Node pause-20220906152815-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                kubelet          Node pause-20220906152815-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                kubelet          Node pause-20220906152815-22187 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             88s                kubelet          Node pause-20220906152815-22187 status is now: NodeNotReady
	  Normal  NodeReady                78s                kubelet          Node pause-20220906152815-22187 status is now: NodeReady
	  Normal  RegisteredNode           76s                node-controller  Node pause-20220906152815-22187 event: Registered Node pause-20220906152815-22187 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x9 over 22s)  kubelet          Node pause-20220906152815-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x7 over 22s)  kubelet          Node pause-20220906152815-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)  kubelet          Node pause-20220906152815-22187 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node pause-20220906152815-22187 event: Registered Node pause-20220906152815-22187 in Controller
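The block above is effectively kubectl describe node output captured at dump time; to regenerate it against this profile:

    kubectl --context pause-20220906152815-22187 describe node pause-20220906152815-22187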
	
	* 
	* ==> dmesg <==
	* [  +0.001536] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001105] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001751] FS-Cache: N-cookie d=000000006f57a5f8 n=0000000004119ae2
	[  +0.001424] FS-Cache: N-key=[8] '89c5800300000000'
	[  +0.002109] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000d596ead8 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001797] FS-Cache: O-cookie d=000000006f57a5f8 n=00000000f83b458d
	[  +0.001466] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001134] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001810] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001458] FS-Cache: N-key=[8] '89c5800300000000'
	[  +3.680989] FS-Cache: Duplicate cookie detected
	[  +0.001019] FS-Cache: O-cookie c=000000003a8c8805 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000057637cac
	[  +0.001460] FS-Cache: O-key=[8] '88c5800300000000'
	[  +0.001144] FS-Cache: N-cookie c=000000000ab19587 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001761] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001454] FS-Cache: N-key=[8] '88c5800300000000'
	[  +0.676412] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000dd15d770 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000060e892c8
	[  +0.001441] FS-Cache: O-key=[8] '93c5800300000000'
	[  +0.001122] FS-Cache: N-cookie c=00000000e728d4f6 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001752] FS-Cache: N-cookie d=000000006f57a5f8 n=000000009b87565f
	[  +0.001438] FS-Cache: N-key=[8] '93c5800300000000'
	
	* 
	* ==> etcd [674d07e8e349] <==
	* {"level":"info","ts":"2022-09-06T22:29:48.579Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 5"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 5"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2022-09-06T22:29:50.425Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-20220906152815-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:29:50.425Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:29:50.425Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:29:50.426Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:29:50.426Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:29:50.426Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:29:50.427Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> etcd [f11659194c9b] <==
	* {"level":"info","ts":"2022-09-06T22:29:27.543Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:29:27.543Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:27.543Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-20220906152815-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:29:28.839Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:29:28.839Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-09-06T22:29:28.839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:29:28.839Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:29:41.286Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-09-06T22:29:41.286Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220906152815-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/09/06 22:29:41 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/09/06 22:29:41 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-09-06T22:29:41.289Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-09-06T22:29:41.291Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:41.292Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:41.292Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220906152815-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:30:10 up 46 min,  0 users,  load average: 0.83, 0.76, 0.62
	Linux pause-20220906152815-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [247a34f75411] <==
	* W0906 22:29:20.912046       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:29:21.699298       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:29:26.116297       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0906 22:29:31.061587       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [bbaebf830e5e] <==
	* I0906 22:29:53.396280       1 controller.go:85] Starting OpenAPI V3 controller
	I0906 22:29:53.396288       1 naming_controller.go:291] Starting NamingConditionController
	I0906 22:29:53.396308       1 establishing_controller.go:76] Starting EstablishingController
	I0906 22:29:53.396334       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0906 22:29:53.396345       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0906 22:29:53.396352       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0906 22:29:53.482259       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 22:29:53.482475       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 22:29:53.483537       1 cache.go:39] Caches are synced for autoregister controller
	I0906 22:29:53.495248       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 22:29:53.496346       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0906 22:29:53.575264       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0906 22:29:53.582016       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 22:29:53.582111       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0906 22:29:53.583603       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0906 22:29:53.587525       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:29:54.205424       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 22:29:54.384282       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 22:29:55.002613       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:29:55.008805       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:29:55.026400       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:29:55.039815       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:29:55.045517       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:30:02.723179       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 22:30:06.589542       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [dc28e720c942] <==
	* I0906 22:29:29.194529       1 serving.go:348] Generated self-signed cert in-memory
	I0906 22:29:29.945571       1 controllermanager.go:178] Version: v1.25.0
	I0906 22:29:29.945613       1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:29:29.946384       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 22:29:29.946410       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0906 22:29:29.946555       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:29:29.946650       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [fcb0e8f19aa9] <==
	* I0906 22:30:06.406109       1 shared_informer.go:262] Caches are synced for TTL
	I0906 22:30:06.408485       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0906 22:30:06.411218       1 shared_informer.go:262] Caches are synced for node
	I0906 22:30:06.411247       1 range_allocator.go:166] Starting range CIDR allocator
	I0906 22:30:06.411251       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0906 22:30:06.411257       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0906 22:30:06.413887       1 shared_informer.go:262] Caches are synced for PVC protection
	I0906 22:30:06.414014       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0906 22:30:06.415111       1 shared_informer.go:262] Caches are synced for daemon sets
	I0906 22:30:06.416034       1 shared_informer.go:262] Caches are synced for ephemeral
	I0906 22:30:06.417538       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0906 22:30:06.419039       1 shared_informer.go:262] Caches are synced for job
	I0906 22:30:06.421848       1 shared_informer.go:262] Caches are synced for disruption
	I0906 22:30:06.423029       1 shared_informer.go:262] Caches are synced for cronjob
	I0906 22:30:06.501991       1 shared_informer.go:262] Caches are synced for attach detach
	I0906 22:30:06.511190       1 shared_informer.go:262] Caches are synced for persistent volume
	I0906 22:30:06.512034       1 shared_informer.go:262] Caches are synced for PV protection
	I0906 22:30:06.527894       1 shared_informer.go:262] Caches are synced for expand
	I0906 22:30:06.554682       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:30:06.582349       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0906 22:30:06.594201       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0906 22:30:06.597666       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:30:06.912443       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:30:06.981112       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:30:06.981158       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [3d9be5d4c242] <==
	* E0906 22:29:30.624644       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220906152815-22187": net/http: TLS handshake timeout
	E0906 22:29:31.753675       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220906152815-22187": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:33.812471       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220906152815-22187": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:38.154090       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220906152815-22187": dial tcp 192.168.76.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [b642d541a180] <==
	* I0906 22:29:55.156800       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:29:55.156866       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:29:55.156880       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:29:55.176646       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:29:55.176692       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:29:55.176701       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:29:55.176809       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:29:55.176851       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:29:55.176967       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:29:55.177243       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:29:55.177269       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:29:55.177763       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:29:55.177788       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:29:55.177804       1 config.go:317] "Starting service config controller"
	I0906 22:29:55.177821       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:29:55.178418       1 config.go:444] "Starting node config controller"
	I0906 22:29:55.178634       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:29:55.278024       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:29:55.278064       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:29:55.279301       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3d38353ac93b] <==
	* I0906 22:29:48.785814       1 serving.go:348] Generated self-signed cert in-memory
	I0906 22:29:53.501861       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.0"
	I0906 22:29:53.501897       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:29:53.508019       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 22:29:53.508100       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0906 22:29:53.508128       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0906 22:29:53.508147       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:29:53.510408       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 22:29:53.510450       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:29:53.510469       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0906 22:29:53.510472       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 22:29:53.608458       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0906 22:29:53.610990       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 22:29:53.611062       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [e151952fd65e] <==
	* W0906 22:29:39.011592       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.011678       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:39.013443       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.013508       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:39.124319       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.124382       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:39.124342       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.124407       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:39.409414       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.409532       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.027851       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.027899       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.129992       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.130035       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.865928       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.866014       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.919018       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.919076       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.945455       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.945510       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.966721       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.966800       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:41.201649       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:41.201692       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:41.281602       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:28:22 UTC, end at Tue 2022-09-06 22:30:11 UTC. --
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: E0906 22:29:53.287835    6070 kubelet.go:2448] "Error getting node" err="node \"pause-20220906152815-22187\" not found"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: E0906 22:29:53.388496    6070 kubelet.go:2448] "Error getting node" err="node \"pause-20220906152815-22187\" not found"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.489137    6070 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.489664    6070 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.576502    6070 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220906152815-22187"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.576679    6070 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220906152815-22187"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.824452    6070 apiserver.go:52] "Watching apiserver"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.827536    6070 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.827649    6070 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.827693    6070 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919050    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w9g2\" (UniqueName: \"kubernetes.io/projected/d3db9b4e-72b8-498d-b2f8-e7f5249cac81-kube-api-access-2w9g2\") pod \"kube-proxy-6sj24\" (UID: \"d3db9b4e-72b8-498d-b2f8-e7f5249cac81\") " pod="kube-system/kube-proxy-6sj24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919099    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3db9b4e-72b8-498d-b2f8-e7f5249cac81-lib-modules\") pod \"kube-proxy-6sj24\" (UID: \"d3db9b4e-72b8-498d-b2f8-e7f5249cac81\") " pod="kube-system/kube-proxy-6sj24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919122    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50ea7d09-4033-4175-9811-a28207750f60-config-volume\") pod \"coredns-565d847f94-xxcwh\" (UID: \"50ea7d09-4033-4175-9811-a28207750f60\") " pod="kube-system/coredns-565d847f94-xxcwh"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919139    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3db9b4e-72b8-498d-b2f8-e7f5249cac81-xtables-lock\") pod \"kube-proxy-6sj24\" (UID: \"d3db9b4e-72b8-498d-b2f8-e7f5249cac81\") " pod="kube-system/kube-proxy-6sj24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919195    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d3db9b4e-72b8-498d-b2f8-e7f5249cac81-kube-proxy\") pod \"kube-proxy-6sj24\" (UID: \"d3db9b4e-72b8-498d-b2f8-e7f5249cac81\") " pod="kube-system/kube-proxy-6sj24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919218    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chw5k\" (UniqueName: \"kubernetes.io/projected/50ea7d09-4033-4175-9811-a28207750f60-kube-api-access-chw5k\") pod \"coredns-565d847f94-xxcwh\" (UID: \"50ea7d09-4033-4175-9811-a28207750f60\") " pod="kube-system/coredns-565d847f94-xxcwh"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919226    6070 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 22:29:55 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:55.029349    6070 scope.go:115] "RemoveContainer" containerID="3d9be5d4c24228d130ae6ee681a725ba0558416924362a619de554f964be4051"
	Sep 06 22:29:55 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:55.080660    6070 request.go:601] Waited for 1.058123734s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token
	Sep 06 22:29:55 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:55.329157    6070 scope.go:115] "RemoveContainer" containerID="e3840ce5a18ea72200453d44c62dddcec33fbda9af162f9d4a08ae58acff60ab"
	Sep 06 22:29:57 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:57.925866    6070 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d1b33e2e-6f0b-4dc5-b778-0ed14d441d68 path="/var/lib/kubelet/pods/d1b33e2e-6f0b-4dc5-b778-0ed14d441d68/volumes"
	Sep 06 22:30:02 pause-20220906152815-22187 kubelet[6070]: I0906 22:30:02.734083    6070 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:30:02 pause-20220906152815-22187 kubelet[6070]: I0906 22:30:02.892427    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxpd6\" (UniqueName: \"kubernetes.io/projected/1076ba8f-0e79-4f3b-8128-739a0d0814b9-kube-api-access-lxpd6\") pod \"storage-provisioner\" (UID: \"1076ba8f-0e79-4f3b-8128-739a0d0814b9\") " pod="kube-system/storage-provisioner"
	Sep 06 22:30:02 pause-20220906152815-22187 kubelet[6070]: I0906 22:30:02.892549    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1076ba8f-0e79-4f3b-8128-739a0d0814b9-tmp\") pod \"storage-provisioner\" (UID: \"1076ba8f-0e79-4f3b-8128-739a0d0814b9\") " pod="kube-system/storage-provisioner"
	Sep 06 22:30:05 pause-20220906152815-22187 kubelet[6070]: I0906 22:30:05.246195    6070 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	
	* 
	* ==> storage-provisioner [213817229cb2] <==
	* I0906 22:30:03.232887       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:30:03.240863       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:30:03.240914       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:30:03.245913       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:30:03.246197       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d1fe8a74-cd3a-4c34-8785-5749bf60c74d", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220906152815-22187_5fb5a6d6-f8de-439c-b22c-47c264d84759 became leader
	I0906 22:30:03.246432       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220906152815-22187_5fb5a6d6-f8de-439c-b22c-47c264d84759!
	I0906 22:30:03.346988       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220906152815-22187_5fb5a6d6-f8de-439c-b22c-47c264d84759!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220906152815-22187 -n pause-20220906152815-22187
helpers_test.go:261: (dbg) Run:  kubectl --context pause-20220906152815-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-20220906152815-22187 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220906152815-22187 describe pod : exit status 1 (37.315601ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context pause-20220906152815-22187 describe pod : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220906152815-22187
helpers_test.go:235: (dbg) docker inspect pause-20220906152815-22187:

-- stdout --
	[
	    {
	        "Id": "2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8",
	        "Created": "2022-09-06T22:28:21.921289409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 139729,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:28:22.212627697Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8/hosts",
	        "LogPath": "/var/lib/docker/containers/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8/2bc2a9a827589f6b485fb1ef322a28e8a306d24cc0161ea06ac7d8c6405d4cb8-json.log",
	        "Name": "/pause-20220906152815-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220906152815-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220906152815-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2536088c1da14752944a4677ad7016291804d6ad2c0b32ab67e73c66cdf8f6e2-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2536088c1da14752944a4677ad7016291804d6ad2c0b32ab67e73c66cdf8f6e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2536088c1da14752944a4677ad7016291804d6ad2c0b32ab67e73c66cdf8f6e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2536088c1da14752944a4677ad7016291804d6ad2c0b32ab67e73c66cdf8f6e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220906152815-22187",
	                "Source": "/var/lib/docker/volumes/pause-20220906152815-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220906152815-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220906152815-22187",
	                "name.minikube.sigs.k8s.io": "pause-20220906152815-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22f8009ce979665aedfc03290832762956fc09768abbe4eeb8ab6b04f0839f76",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57910"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57911"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57913"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57914"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/22f8009ce979",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220906152815-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2bc2a9a82758",
	                        "pause-20220906152815-22187"
	                    ],
	                    "NetworkID": "a4ace12f2e9e8745b1ce59d548a9ac43144f88a66c7a5065fbf1ce18381acfe6",
	                    "EndpointID": "f71107d096fc41ec7a9baff3fb644404751c6c9d3291376607a7573611ee6cfe",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220906152815-22187 -n pause-20220906152815-22187
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220906152815-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220906152815-22187 logs -n 25: (3.07970996s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|-------------------------------------------|-------------------------------------------|----------|---------|---------------------|---------------------|
	|  Command   |                   Args                    |                  Profile                  |   User   | Version |     Start Time      |      End Time       |
	|------------|-------------------------------------------|-------------------------------------------|----------|---------|---------------------|---------------------|
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 5m                             |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 5m                             |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:22 PDT | 06 Sep 22 15:22 PDT |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --cancel-scheduled                        |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:23 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:23 PDT |                     |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| stop       | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:23 PDT | 06 Sep 22 15:23 PDT |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	|            | --schedule 15s                            |                                           |          |         |                     |                     |
	| delete     | -p                                        | scheduled-stop-20220906152228-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:24 PDT | 06 Sep 22 15:24 PDT |
	|            | scheduled-stop-20220906152228-22187       |                                           |          |         |                     |                     |
	| start      | -p                                        | skaffold-20220906152410-22187             | jenkins  | v1.26.1 | 06 Sep 22 15:24 PDT | 06 Sep 22 15:24 PDT |
	|            | skaffold-20220906152410-22187             |                                           |          |         |                     |                     |
	|            | --memory=2600 --driver=docker             |                                           |          |         |                     |                     |
	| docker-env | --shell none -p                           | skaffold-20220906152410-22187             | skaffold | v1.26.1 | 06 Sep 22 15:24 PDT | 06 Sep 22 15:24 PDT |
	|            | skaffold-20220906152410-22187             |                                           |          |         |                     |                     |
	|            | --user=skaffold                           |                                           |          |         |                     |                     |
	| delete     | -p                                        | skaffold-20220906152410-22187             | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|            | skaffold-20220906152410-22187             |                                           |          |         |                     |                     |
	| start      | -p                                        | insufficient-storage-20220906152509-22187 | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT |                     |
	|            | insufficient-storage-20220906152509-22187 |                                           |          |         |                     |                     |
	|            | --memory=2048 --output=json --wait=true   |                                           |          |         |                     |                     |
	|            | --driver=docker                           |                                           |          |         |                     |                     |
	| delete     | -p                                        | insufficient-storage-20220906152509-22187 | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|            | insufficient-storage-20220906152509-22187 |                                           |          |         |                     |                     |
	| start      | -p                                        | offline-docker-20220906152522-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:26 PDT |
	|            | offline-docker-20220906152522-22187       |                                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1                    |                                           |          |         |                     |                     |
	|            | --memory=2048 --wait=true                 |                                           |          |         |                     |                     |
	|            | --driver=docker                           |                                           |          |         |                     |                     |
	| delete     | -p                                        | flannel-20220906152522-22187              | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|            | flannel-20220906152522-22187              |                                           |          |         |                     |                     |
	| delete     | -p                                        | custom-flannel-20220906152522-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:25 PDT | 06 Sep 22 15:25 PDT |
	|            | custom-flannel-20220906152522-22187       |                                           |          |         |                     |                     |
	| delete     | -p                                        | offline-docker-20220906152522-22187       | jenkins  | v1.26.1 | 06 Sep 22 15:26 PDT | 06 Sep 22 15:26 PDT |
	|            | offline-docker-20220906152522-22187       |                                           |          |         |                     |                     |
	| start      | -p                                        | kubernetes-upgrade-20220906152610-22187   | jenkins  | v1.26.1 | 06 Sep 22 15:26 PDT |                     |
	|            | kubernetes-upgrade-20220906152610-22187   |                                           |          |         |                     |                     |
	|            | --memory=2200                             |                                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0              |                                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1 --driver=docker    |                                           |          |         |                     |                     |
	| delete     | -p                                        | missing-upgrade-20220906152523-22187      | jenkins  | v1.26.1 | 06 Sep 22 15:26 PDT | 06 Sep 22 15:26 PDT |
	|            | missing-upgrade-20220906152523-22187      |                                           |          |         |                     |                     |
	| delete     | -p                                        | stopped-upgrade-20220906152634-22187      | jenkins  | v1.26.1 | 06 Sep 22 15:27 PDT | 06 Sep 22 15:27 PDT |
	|            | stopped-upgrade-20220906152634-22187      |                                           |          |         |                     |                     |
	| delete     | -p                                        | running-upgrade-20220906152727-22187      | jenkins  | v1.26.1 | 06 Sep 22 15:28 PDT | 06 Sep 22 15:28 PDT |
	|            | running-upgrade-20220906152727-22187      |                                           |          |         |                     |                     |
	| start      | -p pause-20220906152815-22187             | pause-20220906152815-22187                | jenkins  | v1.26.1 | 06 Sep 22 15:28 PDT | 06 Sep 22 15:28 PDT |
	|            | --memory=2048                             |                                           |          |         |                     |                     |
	|            | --install-addons=false                    |                                           |          |         |                     |                     |
	|            | --wait=all --driver=docker                |                                           |          |         |                     |                     |
	| start      | -p pause-20220906152815-22187             | pause-20220906152815-22187                | jenkins  | v1.26.1 | 06 Sep 22 15:28 PDT | 06 Sep 22 15:30 PDT |
	|            | --alsologtostderr -v=1                    |                                           |          |         |                     |                     |
	|            | --driver=docker                           |                                           |          |         |                     |                     |
	|------------|-------------------------------------------|-------------------------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:28:59
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:28:59.603166   32195 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:28:59.603336   32195 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:28:59.603341   32195 out.go:309] Setting ErrFile to fd 2...
	I0906 15:28:59.603345   32195 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:28:59.603456   32195 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:28:59.603903   32195 out.go:303] Setting JSON to false
	I0906 15:28:59.619038   32195 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8910,"bootTime":1662494429,"procs":333,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:28:59.619144   32195 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:28:59.647374   32195 out.go:177] * [pause-20220906152815-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:28:59.688869   32195 notify.go:193] Checking for updates...
	I0906 15:28:59.709648   32195 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:28:59.730836   32195 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:28:59.751688   32195 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:28:59.772603   32195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:28:59.793912   32195 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:28:59.815496   32195 config.go:180] Loaded profile config "pause-20220906152815-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:28:59.816152   32195 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:28:59.886077   32195 docker.go:137] docker version: linux-20.10.17
	I0906 15:28:59.886228   32195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:29:00.018367   32195 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:56 SystemTime:2022-09-06 22:28:59.960654625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
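	The resource fields minikube reads out of this blob (NCPU, MemTotal, CgroupDriver) can be inspected directly with Docker's Go-template formatting; a minimal sketch against the same daemon:
	
	  docker system info --format 'cpus={{.NCPU}} mem={{.MemTotal}} cgroup={{.CgroupDriver}}'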
	I0906 15:29:00.061749   32195 out.go:177] * Using the docker driver based on existing profile
	I0906 15:29:00.082772   32195 start.go:284] selected driver: docker
	I0906 15:29:00.082793   32195 start.go:808] validating driver "docker" against &{Name:pause-20220906152815-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:29:00.082915   32195 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:29:00.083047   32195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:29:00.214503   32195 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:56 SystemTime:2022-09-06 22:29:00.158060708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:29:00.216567   32195 cni.go:95] Creating CNI manager for ""
	I0906 15:29:00.216586   32195 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:29:00.216601   32195 start_flags.go:310] config:
	{Name:pause-20220906152815-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:29:00.238278   32195 out.go:177] * Starting control plane node pause-20220906152815-22187 in cluster pause-20220906152815-22187
	I0906 15:29:00.259224   32195 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:29:00.281291   32195 out.go:177] * Pulling base image ...
	I0906 15:29:00.323997   32195 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:29:00.324000   32195 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:29:00.324090   32195 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:29:00.324109   32195 cache.go:57] Caching tarball of preloaded images
	I0906 15:29:00.324669   32195 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:29:00.324806   32195 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:29:00.325077   32195 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/config.json ...
	I0906 15:29:00.386985   32195 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:29:00.387002   32195 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:29:00.387013   32195 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:29:00.387060   32195 start.go:364] acquiring machines lock for pause-20220906152815-22187: {Name:mk4180017503fe44437ec5e270ffb6df449347ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:29:00.387152   32195 start.go:368] acquired machines lock for "pause-20220906152815-22187" in 75.414µs
	I0906 15:29:00.387173   32195 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:29:00.387184   32195 fix.go:55] fixHost starting: 
	I0906 15:29:00.387433   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:29:00.453101   32195 fix.go:103] recreateIfNeeded on pause-20220906152815-22187: state=Running err=<nil>
	W0906 15:29:00.453131   32195 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:29:00.474904   32195 out.go:177] * Updating the running docker "pause-20220906152815-22187" container ...
	I0906 15:29:00.516757   32195 machine.go:88] provisioning docker machine ...
	I0906 15:29:00.516816   32195 ubuntu.go:169] provisioning hostname "pause-20220906152815-22187"
	I0906 15:29:00.516938   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:00.593870   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:00.594076   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:00.594094   32195 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220906152815-22187 && echo "pause-20220906152815-22187" | sudo tee /etc/hostname
	I0906 15:29:00.714936   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220906152815-22187
	
	I0906 15:29:00.715006   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:00.779722   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:00.779866   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:00.779880   32195 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220906152815-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220906152815-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220906152815-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:29:00.892102   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: 
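	The hosts update above is idempotent: grep -xq matches whole lines only, so an existing 127.0.1.1 entry is rewritten in place and a missing one is appended exactly once (the empty output here means nothing needed changing). With the docker driver the result can be checked from the host; a sketch using this run's container name:
	
	  docker exec pause-20220906152815-22187 grep 127.0.1.1 /etc/hosts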
	I0906 15:29:00.892138   32195 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:29:00.892169   32195 ubuntu.go:177] setting up certificates
	I0906 15:29:00.892186   32195 provision.go:83] configureAuth start
	I0906 15:29:00.892256   32195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220906152815-22187
	I0906 15:29:00.956093   32195 provision.go:138] copyHostCerts
	I0906 15:29:00.956278   32195 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:29:00.956289   32195 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:29:00.956389   32195 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:29:00.956593   32195 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:29:00.956603   32195 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:29:00.956659   32195 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:29:00.956797   32195 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:29:00.956802   32195 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:29:00.956860   32195 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:29:00.957005   32195 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.pause-20220906152815-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220906152815-22187]
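	The server certificate carries every name a client might dial in its SAN list: the node IP, loopback, and the profile name, as shown in the san=[...] field above. Confirming the SANs on the generated cert is a one-liner with stock openssl (a sketch; the server.pem path is the one scp'd to /etc/docker below):
	
	  openssl x509 -noout -text -in server.pem | grep -A1 'Subject Alternative Name'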
	I0906 15:29:01.118415   32195 provision.go:172] copyRemoteCerts
	I0906 15:29:01.118478   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:29:01.118520   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.188789   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:01.271983   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:29:01.288103   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:29:01.305007   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0906 15:29:01.321565   32195 provision.go:86] duration metric: configureAuth took 429.36012ms
	I0906 15:29:01.321580   32195 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:29:01.321709   32195 config.go:180] Loaded profile config "pause-20220906152815-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:29:01.321780   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.387789   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:01.387938   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:01.387950   32195 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:29:01.501299   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:29:01.501319   32195 ubuntu.go:71] root file system type: overlay
	I0906 15:29:01.501485   32195 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:29:01.501582   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.567705   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:01.567859   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:01.567921   32195 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:29:01.690705   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
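	As the unit's own comments note, systemd rejects multiple ExecStart= values for anything but Type=oneshot, so the bare "ExecStart=" first clears the command inherited from the base configuration before the full dockerd invocation is set. The same reset idiom works in any override; a minimal drop-in sketch (hypothetical path and command):
	
	  # /etc/systemd/system/docker.service.d/override.conf
	  [Service]
	  ExecStart=
	  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock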
	I0906 15:29:01.690788   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.756058   32195 main.go:134] libmachine: Using SSH client type: native
	I0906 15:29:01.756216   32195 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 57910 <nil> <nil>}
	I0906 15:29:01.756229   32195 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:29:01.872574   32195 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:29:01.872588   32195 machine.go:91] provisioned docker machine in 1.355807968s
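	The fast provision here follows from the diff-gated update two entries up: diff -u exits 0 when the installed unit already matches the rendered one, short-circuiting the || branch, so the mv/daemon-reload/restart sequence only runs when the unit actually changed. The general shape of the idiom (hypothetical file and service names):
	
	  diff -u installed.conf rendered.conf || { mv rendered.conf installed.conf && systemctl daemon-reload && systemctl restart some.service; }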
	I0906 15:29:01.872598   32195 start.go:300] post-start starting for "pause-20220906152815-22187" (driver="docker")
	I0906 15:29:01.872603   32195 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:29:01.872682   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:29:01.872729   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:01.938042   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.021994   32195 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:29:02.025757   32195 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:29:02.025772   32195 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:29:02.025778   32195 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:29:02.025784   32195 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:29:02.025793   32195 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:29:02.025908   32195 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:29:02.026042   32195 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:29:02.026194   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:29:02.033762   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:29:02.053064   32195 start.go:303] post-start completed in 180.456324ms
	I0906 15:29:02.053151   32195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:29:02.053223   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:02.118998   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.199982   32195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:29:02.204552   32195 fix.go:57] fixHost completed within 1.817365407s
	I0906 15:29:02.204565   32195 start.go:83] releasing machines lock for "pause-20220906152815-22187", held for 1.817401211s
	I0906 15:29:02.204638   32195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220906152815-22187
	I0906 15:29:02.269753   32195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:29:02.269771   32195 ssh_runner.go:195] Run: systemctl --version
	I0906 15:29:02.269830   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:02.269844   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:02.338654   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.338703   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:29:02.465608   32195 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:29:02.475666   32195 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:29:02.475718   32195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:29:02.487657   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:29:02.500842   32195 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:29:02.593048   32195 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:29:02.671572   32195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:29:02.758296   32195 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:29:09.816595   32195 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.058266504s)
	I0906 15:29:09.816654   32195 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:29:09.937473   32195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:29:10.050359   32195 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:29:10.075080   32195 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:29:10.075165   32195 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:29:10.081157   32195 start.go:471] Will wait 60s for crictl version
	I0906 15:29:10.081228   32195 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:29:10.121153   32195 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
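	crictl reports docker as the runtime because the /etc/crictl.yaml written above points both the runtime and image endpoints at cri-dockerd's socket rather than containerd's default. The endpoint can also be supplied per invocation; an equivalent explicit sketch:
	
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version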
	I0906 15:29:10.121228   32195 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:29:10.197263   32195 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:29:08.814065   31107 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:29:08.814802   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:08.815014   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
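	These kubelet-check lines belong to a second start running concurrently (pid 31107); the probe itself is a plain HTTP GET against the kubelet's local healthz endpoint on port 10248, refused until the kubelet binds it. Reproducing it by hand inside a node is exactly the command quoted in the error:
	
	  curl -sSL http://localhost:10248/healthz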
	I0906 15:29:10.349185   32195 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:29:10.349272   32195 cli_runner.go:164] Run: docker exec -t pause-20220906152815-22187 dig +short host.docker.internal
	I0906 15:29:10.521205   32195 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:29:10.521330   32195 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:29:10.525445   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:10.594219   32195 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:29:10.594284   32195 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:29:10.630834   32195 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:29:10.630851   32195 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:29:10.630919   32195 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:29:10.705526   32195 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:29:10.705552   32195 cache_images.go:84] Images are preloaded, skipping loading
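	Preload handling is two-stage: the tarball must exist in the local cache (the preload.go lines earlier in this start) and the images it ships must already be present in the node's daemon, which the two identical "docker images" listings above confirm. Checking the cached tarball by hand, using the MINIKUBE_HOME from this run:
	
	  ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4"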
	I0906 15:29:10.705630   32195 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:29:10.810881   32195 cni.go:95] Creating CNI manager for ""
	I0906 15:29:10.810895   32195 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:29:10.810916   32195 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:29:10.810943   32195 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220906152815-22187 NodeName:pause-20220906152815-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:29:10.811060   32195 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-20220906152815-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
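	The rendered kubeadm config above bundles four documents: InitConfiguration (node-local endpoint and the cri-dockerd socket), ClusterConfiguration (control-plane endpoint plus per-component extraArgs), KubeletConfiguration, and KubeProxyConfiguration. It can be exercised without touching a live cluster; a sketch against the staging path used below:
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run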
	I0906 15:29:10.811159   32195 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220906152815-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:29:10.811225   32195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:29:10.818935   32195 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:29:10.818998   32195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:29:10.825948   32195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0906 15:29:10.838545   32195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:29:10.851235   32195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0906 15:29:10.863636   32195 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:29:10.867408   32195 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187 for IP: 192.168.76.2
	I0906 15:29:10.867527   32195 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:29:10.867587   32195 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:29:10.867673   32195 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key
	I0906 15:29:10.867734   32195 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/apiserver.key.31bdca25
	I0906 15:29:10.867787   32195 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/proxy-client.key
	I0906 15:29:10.868011   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:29:10.868048   32195 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:29:10.868057   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:29:10.868104   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:29:10.868136   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:29:10.868165   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:29:10.868240   32195 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:29:10.868791   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:29:10.905280   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:29:10.933507   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:29:10.954052   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 15:29:10.992199   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:29:11.010534   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:29:11.027152   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:29:11.044585   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:29:11.065209   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:29:11.082218   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:29:11.099206   32195 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:29:11.115721   32195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:29:11.128022   32195 ssh_runner.go:195] Run: openssl version
	I0906 15:29:11.133153   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:29:11.141435   32195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:29:11.145438   32195 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:29:11.145479   32195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:29:11.150306   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:29:11.157884   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:29:11.165301   32195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:29:11.169161   32195 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:29:11.169196   32195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:29:11.174366   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:29:11.182851   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:29:11.190783   32195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:29:11.194947   32195 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:29:11.194982   32195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:29:11.200109   32195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
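	The four hash-and-link steps above follow the OpenSSL "rehash" convention: each CA certificate is hashed with `openssl x509 -hash -noout`, and a symlink named `<hash>.0` is placed in /etc/ssl/certs so the system trust store can resolve it. A minimal Go sketch of that step (a hypothetical helper for illustration, not minikube's actual code):

    // installCA links a PEM certificate into /etc/ssl/certs under its
    // OpenSSL subject-name hash, mirroring: test -L <hash>.0 || ln -fs ...
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath string) error {
        // `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // symlink already present, nothing to do
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }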
	I0906 15:29:11.207381   32195 kubeadm.go:396] StartCluster: {Name:pause-20220906152815-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:pause-20220906152815-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:29:11.207477   32195 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:29:11.237437   32195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:29:11.244799   32195 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:29:11.244813   32195 kubeadm.go:627] restartCluster start
	I0906 15:29:11.244854   32195 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:29:11.251631   32195 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:11.251692   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:29:11.316600   32195 kubeconfig.go:92] found "pause-20220906152815-22187" server: "https://127.0.0.1:57914"
	I0906 15:29:11.317012   32195 kapi.go:59] client config for pause-20220906152815-22187: &rest.Config{Host:"https://127.0.0.1:57914", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:29:11.317569   32195 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:29:11.324940   32195 api_server.go:165] Checking apiserver status ...
	I0906 15:29:11.324985   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:29:11.334422   32195 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup
	W0906 15:29:11.344770   32195 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:11.344785   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:13.812888   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:13.813101   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:16.347203   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:16.347284   32195 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I0906 15:29:16.612497   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:21.615043   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:21.615078   32195 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I0906 15:29:21.997851   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:23.806911   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:23.807141   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:27.000257   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:27.200414   32195 api_server.go:165] Checking apiserver status ...
	I0906 15:29:27.200501   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:29:27.210198   32195 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup
	W0906 15:29:27.217873   32195 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:27.217884   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:31.050148   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:31.050200   32195 retry.go:31] will retry after 242.214273ms: state is "Stopped"
	I0906 15:29:31.294534   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:31.297376   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:31.297408   32195 retry.go:31] will retry after 300.724609ms: state is "Stopped"
	I0906 15:29:31.598448   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:31.600066   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:31.600082   32195 retry.go:31] will retry after 427.113882ms: state is "Stopped"
	I0906 15:29:32.027578   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:32.029159   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:32.029180   32195 retry.go:31] will retry after 382.2356ms: state is "Stopped"
	I0906 15:29:32.411742   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:32.414015   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:32.414042   32195 retry.go:31] will retry after 505.529557ms: state is "Stopped"
	I0906 15:29:32.919955   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:32.921927   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:32.921954   32195 retry.go:31] will retry after 609.195524ms: state is "Stopped"
	I0906 15:29:33.532401   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:33.534785   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:33.534805   32195 retry.go:31] will retry after 858.741692ms: state is "Stopped"
	I0906 15:29:34.395688   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:34.398122   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:34.398166   32195 retry.go:31] will retry after 1.201160326s: state is "Stopped"
	I0906 15:29:35.599387   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:35.601019   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:35.601041   32195 retry.go:31] will retry after 1.723796097s: state is "Stopped"
	I0906 15:29:37.327004   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:37.328789   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:37.328814   32195 retry.go:31] will retry after 1.596532639s: state is "Stopped"
	I0906 15:29:38.925505   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:38.927803   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:38.927831   32195 retry.go:31] will retry after 2.189373114s: state is "Stopped"
	I0906 15:29:41.119401   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:41.121885   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": EOF
	I0906 15:29:41.121915   32195 api_server.go:165] Checking apiserver status ...
	I0906 15:29:41.121989   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:29:41.131832   32195 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:41.131845   32195 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
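	The retries above (retry.go) poll the apiserver's /healthz with growing delays until an overall deadline expires, at which point the cluster is marked for reconfiguration. A sketch of that polling pattern, assuming a plain net/http client and skipping TLS verification for brevity (the real client pins the cluster CA):

    // waitForHealthz polls url until it returns 200 or the deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        backoff := 250 * time.Millisecond
        for start := time.Now(); time.Since(start) < deadline; {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(backoff)
            backoff *= 2 // roughly mirrors the growing retry intervals in the log
        }
        return fmt.Errorf("apiserver at %s never became healthy", url)
    }

    func main() {
        if err := waitForHealthz("https://127.0.0.1:57914/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }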
	I0906 15:29:41.131853   32195 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:29:41.131907   32195 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:29:41.164251   32195 docker.go:443] Stopping containers: [e151952fd65e dc28e720c942 f11659194c9b 3d9be5d4c242 b733656d86e9 e3840ce5a18e 247a34f75411 4e466b02eec8 a8b84e6776d8 db12d18fbcf9 3865f67b411a 9cf4eeff8d93 036148d07169 4ea0eac22d19 50779b85909c c383af658559 b5ba814c1776 11733036d67d 1a262e52b8a6 525bc7e632ee 6d1b1ef7972d d2c9846b1e11 714381c26668 ae698ecfa8e3 e3db4859e6d0 e1b2edaa7ac1 48a9d0751dd6 25133ecf29f9]
	I0906 15:29:41.164335   32195 ssh_runner.go:195] Run: docker stop e151952fd65e dc28e720c942 f11659194c9b 3d9be5d4c242 b733656d86e9 e3840ce5a18e 247a34f75411 4e466b02eec8 a8b84e6776d8 db12d18fbcf9 3865f67b411a 9cf4eeff8d93 036148d07169 4ea0eac22d19 50779b85909c c383af658559 b5ba814c1776 11733036d67d 1a262e52b8a6 525bc7e632ee 6d1b1ef7972d d2c9846b1e11 714381c26668 ae698ecfa8e3 e3db4859e6d0 e1b2edaa7ac1 48a9d0751dd6 25133ecf29f9
	I0906 15:29:43.793507   31107 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:29:43.793709   31107 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:29:46.311600   32195 ssh_runner.go:235] Completed: docker stop e151952fd65e dc28e720c942 f11659194c9b 3d9be5d4c242 b733656d86e9 e3840ce5a18e 247a34f75411 4e466b02eec8 a8b84e6776d8 db12d18fbcf9 3865f67b411a 9cf4eeff8d93 036148d07169 4ea0eac22d19 50779b85909c c383af658559 b5ba814c1776 11733036d67d 1a262e52b8a6 525bc7e632ee 6d1b1ef7972d d2c9846b1e11 714381c26668 ae698ecfa8e3 e3db4859e6d0 e1b2edaa7ac1 48a9d0751dd6 25133ecf29f9: (5.147237759s)
	I0906 15:29:46.311676   32195 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:29:46.346530   32195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:29:46.354429   32195 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep  6 22:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 22:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2043 Sep  6 22:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:28 /etc/kubernetes/scheduler.conf
	
	I0906 15:29:46.354492   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:29:46.362206   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:29:46.370227   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:29:46.379342   32195 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:46.379408   32195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:29:46.388502   32195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:29:46.396359   32195 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:29:46.396413   32195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
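	The grep checks above decide whether each kubeconfig under /etc/kubernetes still references the expected control-plane endpoint; here admin.conf and kubelet.conf pass, while controller-manager.conf and scheduler.conf do not and are deleted so the following kubeadm phases regenerate them. A hypothetical Go equivalent of that cleanup:

    // Remove any kubeconfig that no longer mentions the control-plane endpoint.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // missing file: nothing to clean up
            }
            if !bytes.Contains(data, []byte(endpoint)) {
                fmt.Printf("%s lacks %s, removing\n", f, endpoint)
                os.Remove(f)
            }
        }
    }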
	I0906 15:29:46.403592   32195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:29:46.411337   32195 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:29:46.411354   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:46.474920   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.599906   32195 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.124964786s)
	I0906 15:29:47.599921   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.748957   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:47.797960   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
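	Rather than a full `kubeadm init`, the restart path re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config. A simplified sketch of driving the same phase sequence; the log actually runs these inside the node over SSH, and the binary path is the one visible above:

    // Re-run selected kubeadm init phases against an existing config.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.25.0/kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }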
	I0906 15:29:47.892070   32195 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:29:47.892139   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:29:47.902983   32195 api_server.go:71] duration metric: took 10.915563ms to wait for apiserver process to appear ...
	I0906 15:29:47.903008   32195 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:29:47.903024   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:52.903579   32195 api_server.go:256] stopped: https://127.0.0.1:57914/healthz: Get "https://127.0.0.1:57914/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 15:29:53.403672   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:53.473315   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:29:53.473337   32195 api_server.go:102] status: https://127.0.0.1:57914/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:29:53.904223   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:53.910683   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:29:53.910696   32195 api_server.go:102] status: https://127.0.0.1:57914/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:29:54.403658   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:54.409021   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:29:54.409043   32195 api_server.go:102] status: https://127.0.0.1:57914/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
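	The 403 and 500 responses above are expected during startup: anonymous requests to /healthz are rejected until the RBAC bootstrap roles exist, and the verbose body then lists each readiness check as [+] or [-] until the remaining post-start hooks complete. A small sketch of extracting the failed check names from such a body:

    // failedChecks returns the names of [-] entries in a verbose /healthz body.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func failedChecks(body string) []string {
        var failed []string
        sc := bufio.NewScanner(strings.NewReader(body))
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "[-]") {
                // e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
                if fields := strings.Fields(strings.TrimPrefix(line, "[-]")); len(fields) > 0 {
                    failed = append(failed, fields[0])
                }
            }
        }
        return failed
    }

    func main() {
        fmt.Println(failedChecks("[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n"))
    }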
	I0906 15:29:54.904615   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:29:54.911776   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 200:
	ok
	I0906 15:29:54.917907   32195 api_server.go:140] control plane version: v1.25.0
	I0906 15:29:54.917917   32195 api_server.go:130] duration metric: took 7.014902451s to wait for apiserver health ...
	I0906 15:29:54.917922   32195 cni.go:95] Creating CNI manager for ""
	I0906 15:29:54.917929   32195 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:29:54.917939   32195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:29:54.924269   32195 system_pods.go:59] 7 kube-system pods found
	I0906 15:29:54.924284   32195 system_pods.go:61] "coredns-565d847f94-tb4pk" [d1b33e2e-6f0b-4dc5-b778-0ed14d441d68] Running
	I0906 15:29:54.924288   32195 system_pods.go:61] "coredns-565d847f94-xxcwh" [50ea7d09-4033-4175-9811-a28207750f60] Running
	I0906 15:29:54.924296   32195 system_pods.go:61] "etcd-pause-20220906152815-22187" [5e63d3ad-42b6-4fb6-b92a-be92b400528c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:29:54.924304   32195 system_pods.go:61] "kube-apiserver-pause-20220906152815-22187" [9cd2014e-f7cc-4e00-8163-0d16fb62a018] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:29:54.924312   32195 system_pods.go:61] "kube-controller-manager-pause-20220906152815-22187" [5ada2c0d-3155-4391-bcd5-614b1a8d1f4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 15:29:54.924320   32195 system_pods.go:61] "kube-proxy-6sj24" [d3db9b4e-72b8-498d-b2f8-e7f5249cac81] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 15:29:54.924324   32195 system_pods.go:61] "kube-scheduler-pause-20220906152815-22187" [a63785fe-5692-42e7-85e1-850d79c80bb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:29:54.924328   32195 system_pods.go:74] duration metric: took 6.385314ms to wait for pod list to return data ...
	I0906 15:29:54.924347   32195 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:29:54.927033   32195 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:29:54.927048   32195 node_conditions.go:123] node cpu capacity is 6
	I0906 15:29:54.927057   32195 node_conditions.go:105] duration metric: took 2.704771ms to run NodePressure ...
	I0906 15:29:54.927070   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:29:55.042701   32195 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:29:55.046775   32195 kubeadm.go:778] kubelet initialised
	I0906 15:29:55.046787   32195 kubeadm.go:779] duration metric: took 4.072728ms waiting for restarted kubelet to initialise ...
	I0906 15:29:55.046797   32195 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:29:55.052083   32195 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-tb4pk" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.057508   32195 pod_ready.go:92] pod "coredns-565d847f94-tb4pk" in "kube-system" namespace has status "Ready":"True"
	I0906 15:29:55.057517   32195 pod_ready.go:81] duration metric: took 5.421387ms waiting for pod "coredns-565d847f94-tb4pk" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.057523   32195 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.062455   32195 pod_ready.go:92] pod "coredns-565d847f94-xxcwh" in "kube-system" namespace has status "Ready":"True"
	I0906 15:29:55.062464   32195 pod_ready.go:81] duration metric: took 4.936622ms waiting for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:55.062470   32195 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:29:57.073968   32195 pod_ready.go:102] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:29:59.076127   32195 pod_ready.go:102] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:30:00.072041   32195 pod_ready.go:92] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:00.072054   32195 pod_ready.go:81] duration metric: took 5.009578722s waiting for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:00.072060   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:00.076123   32195 pod_ready.go:92] pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:00.076131   32195 pod_ready.go:81] duration metric: took 4.066381ms waiting for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:00.076140   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.088645   32195 pod_ready.go:92] pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:01.088658   32195 pod_ready.go:81] duration metric: took 1.012512942s waiting for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.088667   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.092757   32195 pod_ready.go:92] pod "kube-proxy-6sj24" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:01.092765   32195 pod_ready.go:81] duration metric: took 4.093235ms waiting for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.092771   32195 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.830691   32195 pod_ready.go:92] pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:01.830703   32195 pod_ready.go:81] duration metric: took 737.926758ms waiting for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:01.830709   32195 pod_ready.go:38] duration metric: took 6.783902804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
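	Each pod_ready wait above checks the pod's Ready condition through the API server. A minimal client-go sketch of one such wait; the kubeconfig path is a placeholder, not the one used by this test run:

    // Poll a kube-system pod until its PodReady condition reports True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-20220906152815-22187", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for pod")
                return
            case <-time.After(2 * time.Second):
            }
        }
    }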
	I0906 15:30:01.830721   32195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:30:01.838005   32195 ops.go:34] apiserver oom_adj: -16
	I0906 15:30:01.838014   32195 kubeadm.go:631] restartCluster took 50.593170106s
	I0906 15:30:01.838022   32195 kubeadm.go:398] StartCluster complete in 50.630620083s
	I0906 15:30:01.838035   32195 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:30:01.838111   32195 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:30:01.838524   32195 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:30:01.839343   32195 kapi.go:59] client config for pause-20220906152815-22187: &rest.Config{Host:"https://127.0.0.1:57914", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:30:01.842012   32195 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220906152815-22187" rescaled to 1
	I0906 15:30:01.842040   32195 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:30:01.842045   32195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:30:01.842080   32195 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0906 15:30:01.842183   32195 config.go:180] Loaded profile config "pause-20220906152815-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:30:01.864738   32195 out.go:177] * Verifying Kubernetes components...
	I0906 15:30:01.864871   32195 addons.go:65] Setting default-storageclass=true in profile "pause-20220906152815-22187"
	I0906 15:30:01.864920   32195 addons.go:65] Setting storage-provisioner=true in profile "pause-20220906152815-22187"
	I0906 15:30:01.885987   32195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220906152815-22187"
	I0906 15:30:01.886012   32195 addons.go:153] Setting addon storage-provisioner=true in "pause-20220906152815-22187"
	W0906 15:30:01.886021   32195 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:30:01.886032   32195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:30:01.886084   32195 host.go:66] Checking if "pause-20220906152815-22187" exists ...
	I0906 15:30:01.886351   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:30:01.887248   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:30:01.913914   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:30:01.913930   32195 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 15:30:01.966007   32195 kapi.go:59] client config for pause-20220906152815-22187: &rest.Config{Host:"https://127.0.0.1:57914", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/pause-20220906152815-22187/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23257c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 15:30:01.990691   32195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:30:01.994519   32195 addons.go:153] Setting addon default-storageclass=true in "pause-20220906152815-22187"
	W0906 15:30:02.010804   32195 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:30:02.010858   32195 host.go:66] Checking if "pause-20220906152815-22187" exists ...
	I0906 15:30:02.010910   32195 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:30:02.010924   32195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:30:02.011008   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:30:02.012004   32195 cli_runner.go:164] Run: docker container inspect pause-20220906152815-22187 --format={{.State.Status}}
	I0906 15:30:02.020398   32195 node_ready.go:35] waiting up to 6m0s for node "pause-20220906152815-22187" to be "Ready" ...
	I0906 15:30:02.023954   32195 node_ready.go:49] node "pause-20220906152815-22187" has status "Ready":"True"
	I0906 15:30:02.023964   32195 node_ready.go:38] duration metric: took 3.545297ms waiting for node "pause-20220906152815-22187" to be "Ready" ...
	I0906 15:30:02.023970   32195 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:30:02.082831   32195 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:30:02.082844   32195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:30:02.082908   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:30:02.082973   32195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220906152815-22187
	I0906 15:30:02.122659   32195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:02.148220   32195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57910 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/pause-20220906152815-22187/id_rsa Username:docker}
	I0906 15:30:02.171051   32195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:30:02.237224   32195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
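	Both addons follow the same pattern above: the manifest held in memory is written onto the node (the "scp memory -->" lines) and then applied with the bundled kubectl and the node-local admin kubeconfig. A hypothetical sketch of that pattern; the manifest body is a placeholder, not the exact storageclass.yaml minikube ships:

    // Materialize an in-memory manifest and apply it with kubectl.
    package main

    import (
        "os"
        "os/exec"
    )

    const manifest = `apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
    provisioner: k8s.io/minikube-hostpath
    ` // placeholder manifest body

    func main() {
        path := "/etc/kubernetes/addons/storageclass.yaml"
        if err := os.WriteFile(path, []byte(manifest), 0o644); err != nil {
            panic(err)
        }
        cmd := exec.Command("kubectl", "apply", "-f", path)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }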
	I0906 15:30:02.743659   32195 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 15:30:02.763901   32195 addons.go:414] enableAddons completed in 921.833683ms
	I0906 15:30:04.527847   32195 pod_ready.go:102] pod "coredns-565d847f94-xxcwh" in "kube-system" namespace has status "Ready":"False"
	I0906 15:30:05.528041   32195 pod_ready.go:92] pod "coredns-565d847f94-xxcwh" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:05.528053   32195 pod_ready.go:81] duration metric: took 3.405376442s waiting for pod "coredns-565d847f94-xxcwh" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.528062   32195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.532569   32195 pod_ready.go:92] pod "etcd-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:05.532578   32195 pod_ready.go:81] duration metric: took 4.510365ms waiting for pod "etcd-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.532584   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.722538   32195 pod_ready.go:92] pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:05.722547   32195 pod_ready.go:81] duration metric: took 189.959018ms waiting for pod "kube-apiserver-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:05.722554   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.121470   32195 pod_ready.go:92] pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:06.121479   32195 pod_ready.go:81] duration metric: took 398.921172ms waiting for pod "kube-controller-manager-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.121486   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.521236   32195 pod_ready.go:92] pod "kube-proxy-6sj24" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:06.521246   32195 pod_ready.go:81] duration metric: took 399.75581ms waiting for pod "kube-proxy-6sj24" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.521252   32195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.923930   32195 pod_ready.go:92] pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:30:06.923940   32195 pod_ready.go:81] duration metric: took 402.683503ms waiting for pod "kube-scheduler-pause-20220906152815-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:30:06.923946   32195 pod_ready.go:38] duration metric: took 4.899969506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:30:06.923964   32195 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:30:06.924012   32195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:30:06.933539   32195 api_server.go:71] duration metric: took 5.0914834s to wait for apiserver process to appear ...
	I0906 15:30:06.933557   32195 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:30:06.933564   32195 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57914/healthz ...
	I0906 15:30:06.938987   32195 api_server.go:266] https://127.0.0.1:57914/healthz returned 200:
	ok
	I0906 15:30:06.940195   32195 api_server.go:140] control plane version: v1.25.0
	I0906 15:30:06.940204   32195 api_server.go:130] duration metric: took 6.642342ms to wait for apiserver health ...
	I0906 15:30:06.940208   32195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:30:07.123061   32195 system_pods.go:59] 7 kube-system pods found
	I0906 15:30:07.123076   32195 system_pods.go:61] "coredns-565d847f94-xxcwh" [50ea7d09-4033-4175-9811-a28207750f60] Running
	I0906 15:30:07.123082   32195 system_pods.go:61] "etcd-pause-20220906152815-22187" [5e63d3ad-42b6-4fb6-b92a-be92b400528c] Running
	I0906 15:30:07.123086   32195 system_pods.go:61] "kube-apiserver-pause-20220906152815-22187" [9cd2014e-f7cc-4e00-8163-0d16fb62a018] Running
	I0906 15:30:07.123089   32195 system_pods.go:61] "kube-controller-manager-pause-20220906152815-22187" [5ada2c0d-3155-4391-bcd5-614b1a8d1f4e] Running
	I0906 15:30:07.123093   32195 system_pods.go:61] "kube-proxy-6sj24" [d3db9b4e-72b8-498d-b2f8-e7f5249cac81] Running
	I0906 15:30:07.123098   32195 system_pods.go:61] "kube-scheduler-pause-20220906152815-22187" [a63785fe-5692-42e7-85e1-850d79c80bb0] Running
	I0906 15:30:07.123101   32195 system_pods.go:61] "storage-provisioner" [1076ba8f-0e79-4f3b-8128-739a0d0814b9] Running
	I0906 15:30:07.123105   32195 system_pods.go:74] duration metric: took 182.893492ms to wait for pod list to return data ...
	I0906 15:30:07.123111   32195 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:30:07.321161   32195 default_sa.go:45] found service account: "default"
	I0906 15:30:07.321172   32195 default_sa.go:55] duration metric: took 198.057494ms for default service account to be created ...
	I0906 15:30:07.321177   32195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:30:07.522693   32195 system_pods.go:86] 7 kube-system pods found
	I0906 15:30:07.522706   32195 system_pods.go:89] "coredns-565d847f94-xxcwh" [50ea7d09-4033-4175-9811-a28207750f60] Running
	I0906 15:30:07.522711   32195 system_pods.go:89] "etcd-pause-20220906152815-22187" [5e63d3ad-42b6-4fb6-b92a-be92b400528c] Running
	I0906 15:30:07.522714   32195 system_pods.go:89] "kube-apiserver-pause-20220906152815-22187" [9cd2014e-f7cc-4e00-8163-0d16fb62a018] Running
	I0906 15:30:07.522718   32195 system_pods.go:89] "kube-controller-manager-pause-20220906152815-22187" [5ada2c0d-3155-4391-bcd5-614b1a8d1f4e] Running
	I0906 15:30:07.522722   32195 system_pods.go:89] "kube-proxy-6sj24" [d3db9b4e-72b8-498d-b2f8-e7f5249cac81] Running
	I0906 15:30:07.522726   32195 system_pods.go:89] "kube-scheduler-pause-20220906152815-22187" [a63785fe-5692-42e7-85e1-850d79c80bb0] Running
	I0906 15:30:07.522731   32195 system_pods.go:89] "storage-provisioner" [1076ba8f-0e79-4f3b-8128-739a0d0814b9] Running
	I0906 15:30:07.522736   32195 system_pods.go:126] duration metric: took 201.555356ms to wait for k8s-apps to be running ...
	I0906 15:30:07.522741   32195 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:30:07.522790   32195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:30:07.532591   32195 system_svc.go:56] duration metric: took 9.845179ms WaitForService to wait for kubelet.
	I0906 15:30:07.532604   32195 kubeadm.go:573] duration metric: took 5.690549978s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:30:07.532618   32195 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:30:07.721563   32195 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:30:07.721574   32195 node_conditions.go:123] node cpu capacity is 6
	I0906 15:30:07.721584   32195 node_conditions.go:105] duration metric: took 188.96208ms to run NodePressure ...
	I0906 15:30:07.721593   32195 start.go:216] waiting for startup goroutines ...
	I0906 15:30:07.755241   32195 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:30:07.779099   32195 out.go:177] * Done! kubectl is now configured to use "pause-20220906152815-22187" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:28:22 UTC, end at Tue 2022-09-06 22:30:13 UTC. --
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.526202416Z" level=info msg="Removing stale sandbox d605f169db0b0560557e19d6b99952df24e6e8237f7a2319b27e4612f0daac56 (c383af658559637a16b100fc863281140e8f58f7f428205c5a83e052d617369f)"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.527484748Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint db3f1e0ae700d98b07e6c6d9789316b893375afd8e2b057bc70037fba855c644 67074503aebcde4977892e78533c054ea32e26b40c32339645a722ce23788480], retrying...."
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.611028786Z" level=info msg="Removing stale sandbox db5560f36934a6b9ea8ebf6e7d4c422ddabe51368227f146ac4773fe65a86f23 (4ea0eac22d199b65bb325f10118ecfe0f5a9c3c5a56f62654129e3671a9c1312)"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.645174784Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint db3f1e0ae700d98b07e6c6d9789316b893375afd8e2b057bc70037fba855c644 87893f8e0000d02925d70b39992410d1ab2ded1d8121a9754f56aaeaeb33b72d], retrying...."
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.732555979Z" level=info msg="Removing stale sandbox f5b0c7b703a41fbcd48d230de7b3ec93c84c812dd5f485a070f5344fb7ea4a27 (1a262e52b8a665d2790c3baf05e276e7deb95a5795761984377a828ef721c2e1)"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.733829489Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint db3f1e0ae700d98b07e6c6d9789316b893375afd8e2b057bc70037fba855c644 4d51000e42546b77d2194111203528e937ea8aca206c241f7bec1e745aeae995], retrying...."
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.756304371Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.791309010Z" level=info msg="Loading containers: done."
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.799785218Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.799852952Z" level=info msg="Daemon has completed initialization"
	Sep 06 22:29:09 pause-20220906152815-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.821896749Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:29:09 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:09.823485233Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 22:29:31 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:31.085187184Z" level=info msg="ignoring event" container=247a34f754110bb2df9dc606061e6222051a04d25c3e8a8478c13e4d1d5005b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.297725215Z" level=info msg="ignoring event" container=3d9be5d4c24228d130ae6ee681a725ba0558416924362a619de554f964be4051 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.297751425Z" level=info msg="ignoring event" container=db12d18fbcf9ed49dff15cf7048ba3d53a00f653c4b0398e2ef265fbfc19d063 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.306292054Z" level=info msg="ignoring event" container=a8b84e6776d87caa43bfb62847b43e04b8e76e62b231ffea4b3f9e86c76df56d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.379913717Z" level=info msg="ignoring event" container=b733656d86e9e02119bf93a0923ebd696c84ef80323a941f22e86e28aa36c585 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.383999115Z" level=info msg="ignoring event" container=9cf4eeff8d9390aad5462175031c996a1928300220131936ca7741d5d7d4376e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.388942245Z" level=info msg="ignoring event" container=3865f67b411a8d5685f9868dba40b6a33b54d700eb5f985a85ac421373615328 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.392438078Z" level=info msg="ignoring event" container=4e466b02eec89ef02aaf2e9cf42bd1289b7b7edc8f3b3d673f12c934e96c9641 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.401470365Z" level=info msg="ignoring event" container=e151952fd65e57ceb7cd70b867433f1a712320eca4d170d65c1f8d4295e10e13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.401495607Z" level=info msg="ignoring event" container=f11659194c9b17223375312f27119dfa16c17d863e19b20c007a6c27a4d66e5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:41 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:41.404829353Z" level=info msg="ignoring event" container=dc28e720c942fc8c611cc2b377de251232abc9ded9d164db61a8a6ea1d3b3952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:29:46 pause-20220906152815-22187 dockerd[3808]: time="2022-09-06T22:29:46.296893503Z" level=info msg="ignoring event" container=e3840ce5a18ea72200453d44c62dddcec33fbda9af162f9d4a08ae58acff60ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	213817229cb26       6e38f40d628db       10 seconds ago       Running             storage-provisioner       0                   7c4fd6c34b2e2
	41d900adb9a39       5185b96f0becf       18 seconds ago       Running             coredns                   2                   3b13571bea99c
	b642d541a180c       58a9a0c6d96f2       18 seconds ago       Running             kube-proxy                2                   4333e8ba35ce2
	674d07e8e349b       a8a176a5d5d69       25 seconds ago       Running             etcd                      3                   934a24d031fdc
	3d38353ac93bb       bef2cf3115095       25 seconds ago       Running             kube-scheduler            3                   47f2f324bd0dc
	fcb0e8f19aa97       1a54c86c03a67       25 seconds ago       Running             kube-controller-manager   3                   1d862e8b756ab
	bbaebf830e5e2       4d2edfd10d3e3       31 seconds ago       Running             kube-apiserver            3                   a7d1629428e0a
	e151952fd65e5       bef2cf3115095       42 seconds ago       Exited              kube-scheduler            2                   9cf4eeff8d939
	dc28e720c942f       1a54c86c03a67       45 seconds ago       Exited              kube-controller-manager   2                   3865f67b411a8
	f11659194c9b1       a8a176a5d5d69       46 seconds ago       Exited              etcd                      2                   a8b84e6776d87
	3d9be5d4c2422       58a9a0c6d96f2       53 seconds ago       Exited              kube-proxy                1                   b733656d86e9e
	e3840ce5a18ea       5185b96f0becf       About a minute ago   Exited              coredns                   1                   4e466b02eec89
	247a34f754110       4d2edfd10d3e3       About a minute ago   Exited              kube-apiserver            2                   db12d18fbcf9e
	
	* 
	* ==> coredns [41d900adb9a3] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> coredns [e3840ce5a18e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 172.17.0.2:33130->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20220906152815-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20220906152815-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=pause-20220906152815-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_28_41_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:28:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20220906152815-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:30:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:29:53 +0000   Tue, 06 Sep 2022 22:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:29:53 +0000   Tue, 06 Sep 2022 22:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:29:53 +0000   Tue, 06 Sep 2022 22:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 22:29:53 +0000   Tue, 06 Sep 2022 22:28:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-20220906152815-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                91e1b8ff-a171-4174-a0b0-f45dc94c7cd7
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-xxcwh                              100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-20220906152815-22187                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         92s
	  kube-system                 kube-apiserver-pause-20220906152815-22187             250m (4%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-20220906152815-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-6sj24                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-20220906152815-22187             100m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (12%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 79s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node pause-20220906152815-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node pause-20220906152815-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                kubelet          Node pause-20220906152815-22187 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             92s                kubelet          Node pause-20220906152815-22187 status is now: NodeNotReady
	  Normal  NodeReady                82s                kubelet          Node pause-20220906152815-22187 status is now: NodeReady
	  Normal  RegisteredNode           80s                node-controller  Node pause-20220906152815-22187 event: Registered Node pause-20220906152815-22187 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25s (x9 over 26s)  kubelet          Node pause-20220906152815-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x7 over 26s)  kubelet          Node pause-20220906152815-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 26s)  kubelet          Node pause-20220906152815-22187 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7s                 node-controller  Node pause-20220906152815-22187 event: Registered Node pause-20220906152815-22187 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001536] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001105] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001751] FS-Cache: N-cookie d=000000006f57a5f8 n=0000000004119ae2
	[  +0.001424] FS-Cache: N-key=[8] '89c5800300000000'
	[  +0.002109] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000d596ead8 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001797] FS-Cache: O-cookie d=000000006f57a5f8 n=00000000f83b458d
	[  +0.001466] FS-Cache: O-key=[8] '89c5800300000000'
	[  +0.001134] FS-Cache: N-cookie c=000000004f31e385 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001810] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001458] FS-Cache: N-key=[8] '89c5800300000000'
	[  +3.680989] FS-Cache: Duplicate cookie detected
	[  +0.001019] FS-Cache: O-cookie c=000000003a8c8805 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000057637cac
	[  +0.001460] FS-Cache: O-key=[8] '88c5800300000000'
	[  +0.001144] FS-Cache: N-cookie c=000000000ab19587 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001761] FS-Cache: N-cookie d=000000006f57a5f8 n=00000000c74b00f3
	[  +0.001454] FS-Cache: N-key=[8] '88c5800300000000'
	[  +0.676412] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=00000000dd15d770 [p=00000000352476ed fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=000000006f57a5f8 n=0000000060e892c8
	[  +0.001441] FS-Cache: O-key=[8] '93c5800300000000'
	[  +0.001122] FS-Cache: N-cookie c=00000000e728d4f6 [p=00000000352476ed fl=2 nc=0 na=1]
	[  +0.001752] FS-Cache: N-cookie d=000000006f57a5f8 n=000000009b87565f
	[  +0.001438] FS-Cache: N-key=[8] '93c5800300000000'
	
	* 
	* ==> etcd [674d07e8e349] <==
	* {"level":"info","ts":"2022-09-06T22:29:48.579Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:29:48.580Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:29:48.582Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 5"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 5"}
	{"level":"info","ts":"2022-09-06T22:29:50.424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2022-09-06T22:29:50.425Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-20220906152815-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:29:50.425Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:29:50.425Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:29:50.426Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:29:50.426Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:29:50.426Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:29:50.427Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> etcd [f11659194c9b] <==
	* {"level":"info","ts":"2022-09-06T22:29:27.543Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:29:27.543Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:27.543Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-20220906152815-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:29:28.837Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:29:28.839Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:29:28.839Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-09-06T22:29:28.839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:29:28.839Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:29:41.286Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-09-06T22:29:41.286Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220906152815-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/09/06 22:29:41 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/09/06 22:29:41 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-09-06T22:29:41.289Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-09-06T22:29:41.291Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:41.292Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:29:41.292Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220906152815-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:30:14 up 46 min,  0 users,  load average: 0.76, 0.75, 0.61
	Linux pause-20220906152815-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [247a34f75411] <==
	* W0906 22:29:20.912046       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:29:21.699298       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:29:26.116297       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0906 22:29:31.061587       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [bbaebf830e5e] <==
	* I0906 22:29:53.396280       1 controller.go:85] Starting OpenAPI V3 controller
	I0906 22:29:53.396288       1 naming_controller.go:291] Starting NamingConditionController
	I0906 22:29:53.396308       1 establishing_controller.go:76] Starting EstablishingController
	I0906 22:29:53.396334       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0906 22:29:53.396345       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0906 22:29:53.396352       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0906 22:29:53.482259       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 22:29:53.482475       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 22:29:53.483537       1 cache.go:39] Caches are synced for autoregister controller
	I0906 22:29:53.495248       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 22:29:53.496346       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0906 22:29:53.575264       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0906 22:29:53.582016       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 22:29:53.582111       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0906 22:29:53.583603       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0906 22:29:53.587525       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:29:54.205424       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 22:29:54.384282       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 22:29:55.002613       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:29:55.008805       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:29:55.026400       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:29:55.039815       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:29:55.045517       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:30:02.723179       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 22:30:06.589542       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [dc28e720c942] <==
	* I0906 22:29:29.194529       1 serving.go:348] Generated self-signed cert in-memory
	I0906 22:29:29.945571       1 controllermanager.go:178] Version: v1.25.0
	I0906 22:29:29.945613       1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:29:29.946384       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 22:29:29.946410       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0906 22:29:29.946555       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:29:29.946650       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [fcb0e8f19aa9] <==
	* I0906 22:30:06.406109       1 shared_informer.go:262] Caches are synced for TTL
	I0906 22:30:06.408485       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0906 22:30:06.411218       1 shared_informer.go:262] Caches are synced for node
	I0906 22:30:06.411247       1 range_allocator.go:166] Starting range CIDR allocator
	I0906 22:30:06.411251       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0906 22:30:06.411257       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0906 22:30:06.413887       1 shared_informer.go:262] Caches are synced for PVC protection
	I0906 22:30:06.414014       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0906 22:30:06.415111       1 shared_informer.go:262] Caches are synced for daemon sets
	I0906 22:30:06.416034       1 shared_informer.go:262] Caches are synced for ephemeral
	I0906 22:30:06.417538       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0906 22:30:06.419039       1 shared_informer.go:262] Caches are synced for job
	I0906 22:30:06.421848       1 shared_informer.go:262] Caches are synced for disruption
	I0906 22:30:06.423029       1 shared_informer.go:262] Caches are synced for cronjob
	I0906 22:30:06.501991       1 shared_informer.go:262] Caches are synced for attach detach
	I0906 22:30:06.511190       1 shared_informer.go:262] Caches are synced for persistent volume
	I0906 22:30:06.512034       1 shared_informer.go:262] Caches are synced for PV protection
	I0906 22:30:06.527894       1 shared_informer.go:262] Caches are synced for expand
	I0906 22:30:06.554682       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:30:06.582349       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0906 22:30:06.594201       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0906 22:30:06.597666       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:30:06.912443       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:30:06.981112       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:30:06.981158       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [3d9be5d4c242] <==
	* E0906 22:29:30.624644       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220906152815-22187": net/http: TLS handshake timeout
	E0906 22:29:31.753675       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220906152815-22187": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:33.812471       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220906152815-22187": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:38.154090       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220906152815-22187": dial tcp 192.168.76.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [b642d541a180] <==
	* I0906 22:29:55.156800       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:29:55.156866       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:29:55.156880       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:29:55.176646       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:29:55.176692       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:29:55.176701       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:29:55.176809       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:29:55.176851       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:29:55.176967       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:29:55.177243       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:29:55.177269       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:29:55.177763       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:29:55.177788       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:29:55.177804       1 config.go:317] "Starting service config controller"
	I0906 22:29:55.177821       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:29:55.178418       1 config.go:444] "Starting node config controller"
	I0906 22:29:55.178634       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:29:55.278024       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:29:55.278064       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:29:55.279301       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3d38353ac93b] <==
	* I0906 22:29:48.785814       1 serving.go:348] Generated self-signed cert in-memory
	I0906 22:29:53.501861       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.0"
	I0906 22:29:53.501897       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:29:53.508019       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 22:29:53.508100       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0906 22:29:53.508128       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0906 22:29:53.508147       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:29:53.510408       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 22:29:53.510450       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:29:53.510469       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0906 22:29:53.510472       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 22:29:53.608458       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0906 22:29:53.610990       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 22:29:53.611062       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [e151952fd65e] <==
	* W0906 22:29:39.011592       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.011678       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:39.013443       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.013508       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:39.124319       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.124382       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:39.124342       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.124407       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:39.409414       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:39.409532       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.027851       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.027899       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.129992       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.130035       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.865928       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.866014       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.919018       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.919076       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.945455       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.945510       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:40.966721       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:40.966800       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0906 22:29:41.201649       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:41.201692       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0906 22:29:41.281602       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:28:22 UTC, end at Tue 2022-09-06 22:30:15 UTC. --
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: E0906 22:29:53.287835    6070 kubelet.go:2448] "Error getting node" err="node \"pause-20220906152815-22187\" not found"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: E0906 22:29:53.388496    6070 kubelet.go:2448] "Error getting node" err="node \"pause-20220906152815-22187\" not found"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.489137    6070 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.489664    6070 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.576502    6070 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220906152815-22187"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.576679    6070 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220906152815-22187"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.824452    6070 apiserver.go:52] "Watching apiserver"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.827536    6070 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.827649    6070 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.827693    6070 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919050    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w9g2\" (UniqueName: \"kubernetes.io/projected/d3db9b4e-72b8-498d-b2f8-e7f5249cac81-kube-api-access-2w9g2\") pod \"kube-proxy-6sj24\" (UID: \"d3db9b4e-72b8-498d-b2f8-e7f5249cac81\") " pod="kube-system/kube-proxy-6sj24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919099    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3db9b4e-72b8-498d-b2f8-e7f5249cac81-lib-modules\") pod \"kube-proxy-6sj24\" (UID: \"d3db9b4e-72b8-498d-b2f8-e7f5249cac81\") " pod="kube-system/kube-proxy-6sj24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919122    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50ea7d09-4033-4175-9811-a28207750f60-config-volume\") pod \"coredns-565d847f94-xxcwh\" (UID: \"50ea7d09-4033-4175-9811-a28207750f60\") " pod="kube-system/coredns-565d847f94-xxcwh"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919139    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3db9b4e-72b8-498d-b2f8-e7f5249cac81-xtables-lock\") pod \"kube-proxy-6sj24\" (UID: \"d3db9b4e-72b8-498d-b2f8-e7f5249cac81\") " pod="kube-system/kube-proxy-6sj24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919195    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d3db9b4e-72b8-498d-b2f8-e7f5249cac81-kube-proxy\") pod \"kube-proxy-6sj24\" (UID: \"d3db9b4e-72b8-498d-b2f8-e7f5249cac81\") " pod="kube-system/kube-proxy-6sj24"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919218    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chw5k\" (UniqueName: \"kubernetes.io/projected/50ea7d09-4033-4175-9811-a28207750f60-kube-api-access-chw5k\") pod \"coredns-565d847f94-xxcwh\" (UID: \"50ea7d09-4033-4175-9811-a28207750f60\") " pod="kube-system/coredns-565d847f94-xxcwh"
	Sep 06 22:29:53 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:53.919226    6070 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 22:29:55 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:55.029349    6070 scope.go:115] "RemoveContainer" containerID="3d9be5d4c24228d130ae6ee681a725ba0558416924362a619de554f964be4051"
	Sep 06 22:29:55 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:55.080660    6070 request.go:601] Waited for 1.058123734s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token
	Sep 06 22:29:55 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:55.329157    6070 scope.go:115] "RemoveContainer" containerID="e3840ce5a18ea72200453d44c62dddcec33fbda9af162f9d4a08ae58acff60ab"
	Sep 06 22:29:57 pause-20220906152815-22187 kubelet[6070]: I0906 22:29:57.925866    6070 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d1b33e2e-6f0b-4dc5-b778-0ed14d441d68 path="/var/lib/kubelet/pods/d1b33e2e-6f0b-4dc5-b778-0ed14d441d68/volumes"
	Sep 06 22:30:02 pause-20220906152815-22187 kubelet[6070]: I0906 22:30:02.734083    6070 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:30:02 pause-20220906152815-22187 kubelet[6070]: I0906 22:30:02.892427    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxpd6\" (UniqueName: \"kubernetes.io/projected/1076ba8f-0e79-4f3b-8128-739a0d0814b9-kube-api-access-lxpd6\") pod \"storage-provisioner\" (UID: \"1076ba8f-0e79-4f3b-8128-739a0d0814b9\") " pod="kube-system/storage-provisioner"
	Sep 06 22:30:02 pause-20220906152815-22187 kubelet[6070]: I0906 22:30:02.892549    6070 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1076ba8f-0e79-4f3b-8128-739a0d0814b9-tmp\") pod \"storage-provisioner\" (UID: \"1076ba8f-0e79-4f3b-8128-739a0d0814b9\") " pod="kube-system/storage-provisioner"
	Sep 06 22:30:05 pause-20220906152815-22187 kubelet[6070]: I0906 22:30:05.246195    6070 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	
	* 
	* ==> storage-provisioner [213817229cb2] <==
	* I0906 22:30:03.232887       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:30:03.240863       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:30:03.240914       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:30:03.245913       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:30:03.246197       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d1fe8a74-cd3a-4c34-8785-5749bf60c74d", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220906152815-22187_5fb5a6d6-f8de-439c-b22c-47c264d84759 became leader
	I0906 22:30:03.246432       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220906152815-22187_5fb5a6d6-f8de-439c-b22c-47c264d84759!
	I0906 22:30:03.346988       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220906152815-22187_5fb5a6d6-f8de-439c-b22c-47c264d84759!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220906152815-22187 -n pause-20220906152815-22187
helpers_test.go:261: (dbg) Run:  kubectl --context pause-20220906152815-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-20220906152815-22187 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220906152815-22187 describe pod : exit status 1 (37.586405ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context pause-20220906152815-22187 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (76.61s)
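Note on the empty "describe pod" invocation above: kubectl requires at least one resource name, so when the field selector returns no non-running pods, the bare "kubectl describe pod" exits 1 with "resource name may not be empty"; the exit status reflects that, not a pod problem. A minimal Go sketch of guarding that post-mortem step, assuming only that kubectl is on PATH; this is an illustration, not the minikube helpers_test.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "pause-20220906152815-22187" // profile name taken from the run above
	// List non-running pods the same way the post-mortem helper does.
	out, err := exec.Command("kubectl", "--context", ctx,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("listing pods failed:", err)
		return
	}
	names := strings.Fields(string(out))
	if len(names) == 0 {
		// Skip the describe step entirely: "kubectl describe pod" with no
		// name is what produced "resource name may not be empty" above.
		fmt.Println("no non-running pods; nothing to describe")
		return
	}
	args := append([]string{"--context", ctx, "describe", "pod"}, names...)
	desc, _ := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("%s", desc)
}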

TestNetworkPlugins/group/kubenet/HairPin (54s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.120465758s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.118269947s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.108359523s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.120053314s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.116963891s)

** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.131833701s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0906 15:41:37.549522   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:41:37.555355   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:41:37.565496   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:41:37.585931   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:41:37.626052   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:41:37.706296   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:41:37.867789   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:41:38.187969   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:41:38.828813   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:41:40.111030   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0906 15:41:47.795054   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.114782709s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (54.00s)
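
For context on the failure above: the hairpin check asks the netcat pod to connect back to itself through its own Service named "netcat" (nc -w 5 -i 5 -z netcat 8080), which exercises hairpin NAT on the node. The test retried seven times before giving up with "failed to connect via pod host". A minimal Go sketch of that retry shape (inferred from the log, not copied from net_test.go; the backoff is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const context = "kubenet-20220906152522-22187" // cluster name taken from the log above
	var lastErr error
	for attempt := 1; attempt <= 7; attempt++ {
		// Ask the netcat pod to dial its own Service (hairpin traffic).
		cmd := exec.Command("kubectl", "--context", context,
			"exec", "deployment/netcat", "--",
			"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
		if lastErr = cmd.Run(); lastErr == nil {
			fmt.Println("hairpin connection succeeded")
			return
		}
		time.Sleep(time.Duration(attempt) * time.Second) // simple backoff between retries
	}
	fmt.Println("failed to connect via pod host:", lastErr)
}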

TestStartStop/group/old-k8s-version/serial/FirstStart (251.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220906154143-22187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220906154143-22187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m10.742793916s)

-- stdout --
	* [old-k8s-version-20220906154143-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-20220906154143-22187 in cluster old-k8s-version-20220906154143-22187
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0906 15:41:43.729188   35815 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:41:43.729415   35815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:41:43.729421   35815 out.go:309] Setting ErrFile to fd 2...
	I0906 15:41:43.729425   35815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:41:43.729531   35815 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:41:43.730080   35815 out.go:303] Setting JSON to false
	I0906 15:41:43.745188   35815 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9674,"bootTime":1662494429,"procs":331,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:41:43.745297   35815 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:41:43.766462   35815 out.go:177] * [old-k8s-version-20220906154143-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:41:43.809875   35815 notify.go:193] Checking for updates...
	I0906 15:41:43.831352   35815 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:41:43.852467   35815 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:41:43.873728   35815 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:41:43.895379   35815 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:41:43.916735   35815 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:41:43.939420   35815 config.go:180] Loaded profile config "kubenet-20220906152522-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:41:43.939521   35815 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:41:44.008769   35815 docker.go:137] docker version: linux-20.10.17
	I0906 15:41:44.008881   35815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:41:44.194427   35815 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:41:44.127133174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:41:44.236426   35815 out.go:177] * Using the docker driver based on user configuration
	I0906 15:41:44.257405   35815 start.go:284] selected driver: docker
	I0906 15:41:44.257418   35815 start.go:808] validating driver "docker" against <nil>
	I0906 15:41:44.257434   35815 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:41:44.259676   35815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:41:44.391231   35815 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:41:44.32441657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:41:44.391351   35815 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0906 15:41:44.391514   35815 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:41:44.412713   35815 out.go:177] * Using Docker Desktop driver with root privileges
	I0906 15:41:44.433610   35815 cni.go:95] Creating CNI manager for ""
	I0906 15:41:44.433631   35815 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:41:44.433640   35815 start_flags.go:310] config:
	{Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:41:44.454517   35815 out.go:177] * Starting control plane node old-k8s-version-20220906154143-22187 in cluster old-k8s-version-20220906154143-22187
	I0906 15:41:44.475456   35815 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:41:44.496614   35815 out.go:177] * Pulling base image ...
	I0906 15:41:44.517548   35815 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:41:44.517629   35815 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:41:44.517631   35815 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0906 15:41:44.517660   35815 cache.go:57] Caching tarball of preloaded images
	I0906 15:41:44.517843   35815 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:41:44.517860   35815 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 15:41:44.518759   35815 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/config.json ...
	I0906 15:41:44.518875   35815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/config.json: {Name:mk596860f4aa7c067a0623f36a64c9f4cee09676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:41:44.581173   35815 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:41:44.581209   35815 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:41:44.581223   35815 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:41:44.581280   35815 start.go:364] acquiring machines lock for old-k8s-version-20220906154143-22187: {Name:mkf6412c70024633cc757c4659ae827dd641d20a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:41:44.581432   35815 start.go:368] acquired machines lock for "old-k8s-version-20220906154143-22187" in 140.516µs
	I0906 15:41:44.581460   35815 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:41:44.581547   35815 start.go:125] createHost starting for "" (driver="docker")
	I0906 15:41:44.625109   35815 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0906 15:41:44.625402   35815 start.go:159] libmachine.API.Create for "old-k8s-version-20220906154143-22187" (driver="docker")
	I0906 15:41:44.625443   35815 client.go:168] LocalClient.Create starting
	I0906 15:41:44.625591   35815 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem
	I0906 15:41:44.625651   35815 main.go:134] libmachine: Decoding PEM data...
	I0906 15:41:44.625673   35815 main.go:134] libmachine: Parsing certificate...
	I0906 15:41:44.625758   35815 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem
	I0906 15:41:44.625795   35815 main.go:134] libmachine: Decoding PEM data...
	I0906 15:41:44.625811   35815 main.go:134] libmachine: Parsing certificate...
	I0906 15:41:44.626423   35815 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220906154143-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 15:41:44.687696   35815 cli_runner.go:211] docker network inspect old-k8s-version-20220906154143-22187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 15:41:44.687802   35815 network_create.go:272] running [docker network inspect old-k8s-version-20220906154143-22187] to gather additional debugging logs...
	I0906 15:41:44.687824   35815 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220906154143-22187
	W0906 15:41:44.749626   35815 cli_runner.go:211] docker network inspect old-k8s-version-20220906154143-22187 returned with exit code 1
	I0906 15:41:44.749647   35815 network_create.go:275] error running [docker network inspect old-k8s-version-20220906154143-22187]: docker network inspect old-k8s-version-20220906154143-22187: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220906154143-22187
	I0906 15:41:44.749667   35815 network_create.go:277] output of [docker network inspect old-k8s-version-20220906154143-22187]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220906154143-22187
	
	** /stderr **
	I0906 15:41:44.749755   35815 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 15:41:44.811884   35815 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00065ac08] misses:0}
	I0906 15:41:44.811923   35815 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:41:44.811937   35815 network_create.go:115] attempt to create docker network old-k8s-version-20220906154143-22187 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0906 15:41:44.812019   35815 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220906154143-22187 old-k8s-version-20220906154143-22187
	W0906 15:41:44.873117   35815 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220906154143-22187 old-k8s-version-20220906154143-22187 returned with exit code 1
	W0906 15:41:44.873154   35815 network_create.go:107] failed to create docker network old-k8s-version-20220906154143-22187 192.168.49.0/24, will retry: subnet is taken
	I0906 15:41:44.873403   35815 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065ac08] amended:false}} dirty:map[] misses:0}
	I0906 15:41:44.873420   35815 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:41:44.873637   35815 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065ac08] amended:true}} dirty:map[192.168.49.0:0xc00065ac08 192.168.58.0:0xc00058c378] misses:0}
	I0906 15:41:44.873662   35815 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:41:44.873698   35815 network_create.go:115] attempt to create docker network old-k8s-version-20220906154143-22187 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0906 15:41:44.873762   35815 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220906154143-22187 old-k8s-version-20220906154143-22187
	W0906 15:41:44.935695   35815 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220906154143-22187 old-k8s-version-20220906154143-22187 returned with exit code 1
	W0906 15:41:44.935725   35815 network_create.go:107] failed to create docker network old-k8s-version-20220906154143-22187 192.168.58.0/24, will retry: subnet is taken
	I0906 15:41:44.936016   35815 network.go:281] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065ac08] amended:true}} dirty:map[192.168.49.0:0xc00065ac08 192.168.58.0:0xc00058c378] misses:1}
	I0906 15:41:44.936034   35815 network.go:239] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:41:44.936227   35815 network.go:290] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065ac08] amended:true}} dirty:map[192.168.49.0:0xc00065ac08 192.168.58.0:0xc00058c378 192.168.67.0:0xc00058c3b0] misses:1}
	I0906 15:41:44.936238   35815 network.go:236] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0906 15:41:44.936246   35815 network_create.go:115] attempt to create docker network old-k8s-version-20220906154143-22187 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0906 15:41:44.936313   35815 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220906154143-22187 old-k8s-version-20220906154143-22187
	I0906 15:41:45.032945   35815 network_create.go:99] docker network old-k8s-version-20220906154143-22187 192.168.67.0/24 created
	I0906 15:41:45.032988   35815 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20220906154143-22187" container
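
A side note on the network setup sequence just above: minikube probed 192.168.49.0/24 and 192.168.58.0/24, found both taken by other test clusters on this agent, and succeeded on 192.168.67.0/24. A minimal Go sketch of that probing loop (the starting octet and step of 9 are inferred from this log, not taken from minikube's source):

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork walks candidate /24 subnets the way the log above does:
// start at 192.168.49.0/24 and step the third octet by 9 until docker
// accepts the network (both 49 and 58 were already taken in this run).
func createNetwork(name string) (string, error) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		fmt.Printf("subnet %s unavailable, retrying: %s", subnet, out)
	}
	return "", fmt.Errorf("no free /24 found for network %q", name)
}

func main() {
	if subnet, err := createNetwork("old-k8s-version-20220906154143-22187"); err == nil {
		fmt.Println("created docker network on", subnet)
	}
}
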
	I0906 15:41:45.033078   35815 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 15:41:45.096274   35815 cli_runner.go:164] Run: docker volume create old-k8s-version-20220906154143-22187 --label name.minikube.sigs.k8s.io=old-k8s-version-20220906154143-22187 --label created_by.minikube.sigs.k8s.io=true
	I0906 15:41:45.158561   35815 oci.go:103] Successfully created a docker volume old-k8s-version-20220906154143-22187
	I0906 15:41:45.158687   35815 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220906154143-22187-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220906154143-22187 --entrypoint /usr/bin/test -v old-k8s-version-20220906154143-22187:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -d /var/lib
	I0906 15:41:45.610386   35815 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220906154143-22187
	I0906 15:41:45.610442   35815 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:41:45.610456   35815 kic.go:179] Starting extracting preloaded images to volume ...
	I0906 15:41:45.610541   35815 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220906154143-22187:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -I lz4 -xf /preloaded.tar -C /extractDir
	I0906 15:41:49.408177   35815 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220906154143-22187:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d -I lz4 -xf /preloaded.tar -C /extractDir: (3.797583242s)
	I0906 15:41:49.408197   35815 kic.go:188] duration metric: took 3.797740 seconds to extract preloaded images to volume
	I0906 15:41:49.408320   35815 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 15:41:49.544125   35815 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220906154143-22187 --name old-k8s-version-20220906154143-22187 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220906154143-22187 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220906154143-22187 --network old-k8s-version-20220906154143-22187 --ip 192.168.67.2 --volume old-k8s-version-20220906154143-22187:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d
	I0906 15:41:50.054028   35815 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Running}}
	I0906 15:41:50.280631   35815 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Status}}
	I0906 15:41:50.355100   35815 cli_runner.go:164] Run: docker exec old-k8s-version-20220906154143-22187 stat /var/lib/dpkg/alternatives/iptables
	I0906 15:41:50.467697   35815 oci.go:144] the created container "old-k8s-version-20220906154143-22187" has a running status.
	I0906 15:41:50.467726   35815 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa...
	I0906 15:41:50.691989   35815 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 15:41:50.811146   35815 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Status}}
	I0906 15:41:50.903580   35815 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 15:41:50.903600   35815 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220906154143-22187 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 15:41:51.141788   35815 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Status}}
	I0906 15:41:51.255645   35815 machine.go:88] provisioning docker machine ...
	I0906 15:41:51.255688   35815 ubuntu.go:169] provisioning hostname "old-k8s-version-20220906154143-22187"
	I0906 15:41:51.255779   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:51.322041   35815 main.go:134] libmachine: Using SSH client type: native
	I0906 15:41:51.322234   35815 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59416 <nil> <nil>}
	I0906 15:41:51.322255   35815 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220906154143-22187 && echo "old-k8s-version-20220906154143-22187" | sudo tee /etc/hostname
	I0906 15:41:51.443099   35815 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220906154143-22187
	
	I0906 15:41:51.443189   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:51.507846   35815 main.go:134] libmachine: Using SSH client type: native
	I0906 15:41:51.508028   35815 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59416 <nil> <nil>}
	I0906 15:41:51.508046   35815 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220906154143-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220906154143-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220906154143-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:41:51.621295   35815 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:41:51.621313   35815 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:41:51.621331   35815 ubuntu.go:177] setting up certificates
	I0906 15:41:51.621338   35815 provision.go:83] configureAuth start
	I0906 15:41:51.621405   35815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:41:51.785586   35815 provision.go:138] copyHostCerts
	I0906 15:41:51.785691   35815 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:41:51.785705   35815 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:41:51.785808   35815 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:41:51.786003   35815 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:41:51.786012   35815 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:41:51.786100   35815 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:41:51.786258   35815 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:41:51.786264   35815 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:41:51.786323   35815 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:41:51.786463   35815 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220906154143-22187 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220906154143-22187]
	I0906 15:41:52.014177   35815 provision.go:172] copyRemoteCerts
	I0906 15:41:52.014277   35815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:41:52.014338   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:52.202533   35815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59416 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:41:52.285143   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:41:52.341626   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0906 15:41:52.360005   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:41:52.377714   35815 provision.go:86] duration metric: configureAuth took 756.35406ms
	I0906 15:41:52.377742   35815 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:41:52.377893   35815 config.go:180] Loaded profile config "old-k8s-version-20220906154143-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 15:41:52.377949   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:52.524141   35815 main.go:134] libmachine: Using SSH client type: native
	I0906 15:41:52.524302   35815 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59416 <nil> <nil>}
	I0906 15:41:52.524330   35815 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:41:52.634876   35815 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:41:52.634887   35815 ubuntu.go:71] root file system type: overlay
	I0906 15:41:52.635044   35815 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:41:52.635122   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:52.768400   35815 main.go:134] libmachine: Using SSH client type: native
	I0906 15:41:52.768566   35815 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59416 <nil> <nil>}
	I0906 15:41:52.768616   35815 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:41:52.895896   35815 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:41:52.895947   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:52.962893   35815 main.go:134] libmachine: Using SSH client type: native
	I0906 15:41:52.963067   35815 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59416 <nil> <nil>}
	I0906 15:41:52.963080   35815 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:41:53.645736   35815 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-09-06 22:41:52.904611496 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
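A minimal sketch of the two idioms this provisioning step leans on (unit path and ExecStart value here are illustrative, not what minikube writes; minikube rewrites the full unit rather than using a drop-in): the "diff || replace-and-restart" guard from the SSH command above, and the empty ExecStart= that clears an inherited start command so a Type=notify unit ends up with exactly one ExecStart:

    # drop-in override: the first ExecStart= resets the inherited command,
    # the second sets the one we actually want
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    # pick up the new unit definition and restart, mirroring the guard above
    sudo systemctl daemon-reload && sudo systemctl restart docker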
	
	I0906 15:41:53.645757   35815 machine.go:91] provisioned docker machine in 2.390092865s
	I0906 15:41:53.645764   35815 client.go:171] LocalClient.Create took 9.02031576s
	I0906 15:41:53.645783   35815 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-20220906154143-22187" took 9.020378441s
	I0906 15:41:53.645797   35815 start.go:300] post-start starting for "old-k8s-version-20220906154143-22187" (driver="docker")
	I0906 15:41:53.645802   35815 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:41:53.645872   35815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:41:53.645927   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:53.714610   35815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59416 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
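The inspect template used above is how the host port Docker published for the container's SSH endpoint (22/tcp) gets resolved; a sketch with the container name as a placeholder:

    NAME=old-k8s-version-20220906154143-22187
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$NAME"
    # equivalent shorthand: docker port "$NAME" 22/tcp   -> 0.0.0.0:59416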
	I0906 15:41:53.808529   35815 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:41:53.813982   35815 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:41:53.813997   35815 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:41:53.814003   35815 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:41:53.814008   35815 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:41:53.814018   35815 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:41:53.814134   35815 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:41:53.814290   35815 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:41:53.814466   35815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:41:53.822427   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:41:53.839573   35815 start.go:303] post-start completed in 193.767276ms
	I0906 15:41:53.840117   35815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:41:53.914467   35815 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/config.json ...
	I0906 15:41:53.914887   35815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:41:53.914935   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:53.990844   35815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59416 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:41:54.069568   35815 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:41:54.074348   35815 start.go:128] duration metric: createHost completed in 9.492792568s
	I0906 15:41:54.074363   35815 start.go:83] releasing machines lock for "old-k8s-version-20220906154143-22187", held for 9.492921618s
	I0906 15:41:54.074441   35815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:41:54.160805   35815 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0906 15:41:54.160809   35815 ssh_runner.go:195] Run: systemctl --version
	I0906 15:41:54.160891   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:54.160924   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:54.228665   35815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59416 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:41:54.228775   35815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59416 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:41:54.470611   35815 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:41:54.480545   35815 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:41:54.480602   35815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:41:54.489665   35815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:41:54.502824   35815 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:41:54.565758   35815 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:41:54.631073   35815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:41:54.695325   35815 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:41:54.906702   35815 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:41:54.945380   35815 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:41:55.017877   35815 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0906 15:41:55.017982   35815 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220906154143-22187 dig +short host.docker.internal
	I0906 15:41:55.143234   35815 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:41:55.143331   35815 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:41:55.148507   35815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
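The hosts rewrite above is an idempotent pattern: strip any stale line for the name, append the fresh mapping, then copy (not move) the result over /etc/hosts. cp writes through the existing inode, which matters inside containers where /etc/hosts is a bind mount and a rename over it would fail. A generic sketch with placeholder values:

    IP=192.168.65.2; NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$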
	I0906 15:41:55.159656   35815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:41:55.227963   35815 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:41:55.228045   35815 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:41:55.259904   35815 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0906 15:41:55.259923   35815 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:41:55.260005   35815 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:41:55.296160   35815 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0906 15:41:55.296178   35815 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:41:55.296264   35815 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
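The CgroupDriver probe above exists because the kubelet must be configured with the same cgroup driver Docker reports; a mismatch between the two is a classic reason for a kubelet that never becomes healthy. Sketch of the consistency check:

    docker info --format '{{.CgroupDriver}}'        # e.g. "systemd"
    grep cgroupDriver /var/lib/kubelet/config.yaml  # should agree once written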
	I0906 15:41:55.379992   35815 cni.go:95] Creating CNI manager for ""
	I0906 15:41:55.380005   35815 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:41:55.380018   35815 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:41:55.380034   35815 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220906154143-22187 NodeName:old-k8s-version-20220906154143-22187 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:41:55.380160   35815 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220906154143-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220906154143-22187
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 15:41:55.380248   35815 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220906154143-22187 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
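With the kubeadm.yaml and the kubelet unit above staged, a non-destructive way to vet the generated config would be a dry run (kubeadm init supports --dry-run in this version; this is a sketch, not a step minikube itself performs):

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run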
	I0906 15:41:55.380303   35815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0906 15:41:55.388961   35815 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:41:55.389031   35815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:41:55.397909   35815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0906 15:41:55.412313   35815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:41:55.425760   35815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0906 15:41:55.440821   35815 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:41:55.445228   35815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:41:55.455708   35815 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187 for IP: 192.168.67.2
	I0906 15:41:55.455841   35815 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:41:55.455889   35815 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:41:55.455931   35815 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/client.key
	I0906 15:41:55.455944   35815 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/client.crt with IP's: []
	I0906 15:41:55.585969   35815 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/client.crt ...
	I0906 15:41:55.585985   35815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/client.crt: {Name:mk1505cf30a2b007025128e6aedc865e8245166b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:41:55.586303   35815 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/client.key ...
	I0906 15:41:55.586317   35815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/client.key: {Name:mk045d4247250d43fa577c2c3ea9fb89445721c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:41:55.586521   35815 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key.c7fa3a9e
	I0906 15:41:55.586536   35815 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 15:41:55.726877   35815 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.crt.c7fa3a9e ...
	I0906 15:41:55.726898   35815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.crt.c7fa3a9e: {Name:mkd05d27ea31b316564977cae5183d5bea7443c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:41:55.727244   35815 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key.c7fa3a9e ...
	I0906 15:41:55.727253   35815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key.c7fa3a9e: {Name:mk3dae47afc1bda6798170c7998bfddd46afac5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:41:55.727458   35815 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.crt
	I0906 15:41:55.727636   35815 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key
	I0906 15:41:55.727811   35815 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key
	I0906 15:41:55.727828   35815 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.crt with IP's: []
	I0906 15:41:55.899373   35815 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.crt ...
	I0906 15:41:55.899389   35815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.crt: {Name:mk92ae17b2104cea7dbc2beb4f763fdd8d427b38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:41:55.899663   35815 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key ...
	I0906 15:41:55.899671   35815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key: {Name:mk3402b52a7c0c839926d27210770fc1270b22ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:41:55.900030   35815 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:41:55.900073   35815 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:41:55.900082   35815 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:41:55.900117   35815 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:41:55.900157   35815 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:41:55.900192   35815 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:41:55.900255   35815 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:41:55.900707   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:41:55.919140   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:41:55.936903   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:41:55.954461   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:41:55.973417   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:41:55.991209   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:41:56.008976   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:41:56.028814   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:41:56.045545   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:41:56.062095   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:41:56.079053   35815 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:41:56.095859   35815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:41:56.108772   35815 ssh_runner.go:195] Run: openssl version
	I0906 15:41:56.114247   35815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:41:56.122443   35815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:41:56.126370   35815 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:41:56.126414   35815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:41:56.131730   35815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:41:56.139238   35815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:41:56.147790   35815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:41:56.152179   35815 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:41:56.152223   35815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:41:56.157800   35815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:41:56.166493   35815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:41:56.174679   35815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:41:56.178645   35815 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:41:56.178688   35815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:41:56.184909   35815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
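The openssl/ln sequence above implements OpenSSL's hashed CA directory layout: verifiers scan /etc/ssl/certs for symlinks named <subject-hash>.0, so each installed PEM gets a link named after its subject hash. Generic sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs "$CERT"   # prints "... OK" once linked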
	I0906 15:41:56.193511   35815 kubeadm.go:396] StartCluster: {Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:41:56.193602   35815 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:41:56.223639   35815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:41:56.231254   35815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:41:56.238669   35815 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:41:56.238717   35815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:41:56.246088   35815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:41:56.246113   35815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:41:56.290274   35815 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:41:56.290326   35815 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:41:56.612947   35815 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:41:56.613027   35815 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:41:56.613120   35815 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 15:41:56.895291   35815 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:41:56.896297   35815 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:41:56.902494   35815 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:41:56.960890   35815 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:41:57.003042   35815 out.go:204]   - Generating certificates and keys ...
	I0906 15:41:57.003132   35815 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:41:57.003237   35815 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:41:57.293078   35815 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 15:41:57.420671   35815 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0906 15:41:57.806645   35815 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0906 15:41:58.049833   35815 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0906 15:41:58.143766   35815 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0906 15:41:58.143897   35815 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220906154143-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0906 15:41:58.483059   35815 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0906 15:41:58.483170   35815 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220906154143-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0906 15:41:58.762546   35815 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 15:41:58.919225   35815 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 15:41:59.210599   35815 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0906 15:41:59.210819   35815 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:41:59.561364   35815 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:41:59.733662   35815 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:41:59.848584   35815 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:42:00.013451   35815 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:42:00.014267   35815 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:42:00.042867   35815 out.go:204]   - Booting up control plane ...
	I0906 15:42:00.042970   35815 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:42:00.043068   35815 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:42:00.043133   35815 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:42:00.043218   35815 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:42:00.043364   35815 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:42:39.995396   35815 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:42:39.996060   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:42:39.996288   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:42:44.993135   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:42:44.993303   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:42:54.987985   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:42:54.988174   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:43:14.974649   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:43:14.974793   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:43:54.948061   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:43:54.948271   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:43:54.948296   35815 kubeadm.go:317] 
	I0906 15:43:54.948373   35815 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:43:54.948446   35815 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:43:54.948461   35815 kubeadm.go:317] 
	I0906 15:43:54.948496   35815 kubeadm.go:317] This error is likely caused by:
	I0906 15:43:54.948533   35815 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:43:54.948655   35815 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:43:54.948668   35815 kubeadm.go:317] 
	I0906 15:43:54.948765   35815 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:43:54.948795   35815 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:43:54.948824   35815 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:43:54.948832   35815 kubeadm.go:317] 
	I0906 15:43:54.948951   35815 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:43:54.949053   35815 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0906 15:43:54.949154   35815 kubeadm.go:317] Here is one example of how you may list all Kubernetes containers running in docker:
	I0906 15:43:54.949217   35815 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:43:54.949280   35815 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:43:54.949308   35815 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:43:54.951922   35815 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:43:54.952034   35815 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:43:54.952111   35815 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:43:54.952164   35815 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:43:54.952224   35815 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0906 15:43:54.952365   35815 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220906154143-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220906154143-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220906154143-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220906154143-22187 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
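A triage sketch for the kubelet-check loop above, following the hints kubeadm prints (run inside the node, e.g. via minikube ssh; all commands are standard):

    systemctl status kubelet --no-pager              # active, or crash-looping?
    journalctl -u kubelet --no-pager | tail -n 50
    curl -sS http://localhost:10248/healthz; echo    # the probe kubeadm keeps retrying
    docker ps -a --filter name=k8s_                  # did any control-plane containers start?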
	
	I0906 15:43:54.952393   35815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:43:55.375087   35815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:43:55.384472   35815 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:43:55.384522   35815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:43:55.391553   35815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:43:55.391574   35815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:43:55.439958   35815 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:43:55.440022   35815 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:43:55.756199   35815 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:43:55.756298   35815 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:43:55.756394   35815 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 15:43:56.031864   35815 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:43:56.032504   35815 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:43:56.039605   35815 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:43:56.108801   35815 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:43:56.130871   35815 out.go:204]   - Generating certificates and keys ...
	I0906 15:43:56.130944   35815 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:43:56.131018   35815 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:43:56.131100   35815 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:43:56.131196   35815 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:43:56.131300   35815 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:43:56.131375   35815 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:43:56.131446   35815 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:43:56.131503   35815 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:43:56.131565   35815 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:43:56.131622   35815 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:43:56.131661   35815 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:43:56.131710   35815 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:43:56.497958   35815 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:43:56.624222   35815 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:43:56.692141   35815 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:43:56.808415   35815 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:43:56.809365   35815 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:43:56.830788   35815 out.go:204]   - Booting up control plane ...
	I0906 15:43:56.830949   35815 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:43:56.831081   35815 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:43:56.831185   35815 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:43:56.831316   35815 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:43:56.831549   35815 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:44:36.792451   35815 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:44:36.793776   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:44:36.793975   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:44:41.796844   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:44:41.796991   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:44:51.797815   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:44:51.797957   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:45:11.790322   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:45:11.790492   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:45:51.764711   35815 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:45:51.764950   35815 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:45:51.764966   35815 kubeadm.go:317] 
	I0906 15:45:51.765002   35815 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:45:51.765063   35815 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:45:51.765078   35815 kubeadm.go:317] 
	I0906 15:45:51.765118   35815 kubeadm.go:317] This error is likely caused by:
	I0906 15:45:51.765155   35815 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:45:51.765264   35815 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:45:51.765277   35815 kubeadm.go:317] 
	I0906 15:45:51.765357   35815 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:45:51.765393   35815 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:45:51.765420   35815 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:45:51.765424   35815 kubeadm.go:317] 
	I0906 15:45:51.765497   35815 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:45:51.765563   35815 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:45:51.765632   35815 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:45:51.765672   35815 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:45:51.765732   35815 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:45:51.765753   35815 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:45:51.768990   35815 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:45:51.769108   35815 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:45:51.769189   35815 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:45:51.769280   35815 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:45:51.769338   35815 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0906 15:45:51.769367   35815 kubeadm.go:398] StartCluster complete in 3m55.553264145s
	I0906 15:45:51.769435   35815 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:45:51.798602   35815 logs.go:274] 0 containers: []
	W0906 15:45:51.798615   35815 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:45:51.798668   35815 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:45:51.827813   35815 logs.go:274] 0 containers: []
	W0906 15:45:51.827825   35815 logs.go:276] No container was found matching "etcd"
	I0906 15:45:51.827886   35815 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:45:51.856802   35815 logs.go:274] 0 containers: []
	W0906 15:45:51.856815   35815 logs.go:276] No container was found matching "coredns"
	I0906 15:45:51.856876   35815 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:45:51.887964   35815 logs.go:274] 0 containers: []
	W0906 15:45:51.887976   35815 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:45:51.888047   35815 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:45:51.919888   35815 logs.go:274] 0 containers: []
	W0906 15:45:51.919900   35815 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:45:51.919982   35815 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:45:51.950771   35815 logs.go:274] 0 containers: []
	W0906 15:45:51.950789   35815 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:45:51.950852   35815 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:45:51.979531   35815 logs.go:274] 0 containers: []
	W0906 15:45:51.979544   35815 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:45:51.979599   35815 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:45:52.010593   35815 logs.go:274] 0 containers: []
	W0906 15:45:52.010605   35815 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:45:52.010613   35815 logs.go:123] Gathering logs for dmesg ...
	I0906 15:45:52.010620   35815 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:45:52.023098   35815 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:45:52.023110   35815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:45:52.075718   35815 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:45:52.075730   35815 logs.go:123] Gathering logs for Docker ...
	I0906 15:45:52.075737   35815 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:45:52.091302   35815 logs.go:123] Gathering logs for container status ...
	I0906 15:45:52.091314   35815 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:45:54.143067   35815 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051714027s)
	I0906 15:45:54.143208   35815 logs.go:123] Gathering logs for kubelet ...
	I0906 15:45:54.143215   35815 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0906 15:45:54.185008   35815 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 15:45:54.185027   35815 out.go:239] * 
	W0906 15:45:54.185169   35815 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:45:54.185182   35815 out.go:239] * 
	W0906 15:45:54.185812   35815 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:45:54.247989   35815 out.go:177] 
	W0906 15:45:54.322252   35815 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:45:54.322399   35815 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 15:45:54.322455   35815 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 15:45:54.379951   35815 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220906154143-22187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
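The log's own remediation advice is consistent: inspect the kubelet journal, then retry with the systemd cgroup driver. A minimal reproduction sketch, assembled only from commands the output above already names (profile name, driver, and flags taken verbatim; whether the cgroup-driver change resolves this particular timeout is unverified):

	# inspect why the kubelet never answered on 127.0.0.1:10248
	out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 ssh -- sudo journalctl -xeu kubelet

	# retry the start with the suggested kubelet cgroup driver
	out/minikube-darwin-amd64 start -p old-k8s-version-20220906154143-22187 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd

	# collect logs for a GitHub issue, as the boxed message requests
	out/minikube-darwin-amd64 logs --file=logs.txt -p old-k8s-version-20220906154143-22187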
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220906154143-22187
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220906154143-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8",
	        "Created": "2022-09-06T22:41:49.616534464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233297,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:41:49.947340708Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hosts",
	        "LogPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8-json.log",
	        "Name": "/old-k8s-version-20220906154143-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220906154143-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220906154143-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220906154143-22187",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220906154143-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220906154143-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac26f8e95c47984d4f033eccb6473e2db1bc4b07e981e1c31c620ad3db239966",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59416"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59412"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59414"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59415"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ac26f8e95c47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220906154143-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ccebcd496a2",
	                        "old-k8s-version-20220906154143-22187"
	                    ],
	                    "NetworkID": "3e22c4664759861d82314ff89c941b324eadf283ebb8fd6949e8fc4ad4c9a041",
	                    "EndpointID": "96050ea0ab9bfeeb7e9e137674d4e2b254dd8e8ae8f463ca42529fed01f0431b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
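For triage purposes the inspect dump above reduces to three facts: the kic container is Running (Pid 233297), the apiserver port 8443/tcp is published on host port 59415, and the node holds 192.168.67.2 on the profile network. A hedged sketch of extracting just those fields with standard docker Go-template flags, instead of rereading the full document:

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' old-k8s-version-20220906154143-22187
	docker port old-k8s-version-20220906154143-22187 8443/tcp
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-20220906154143-22187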
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 6 (425.886549ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 15:45:54.958819   36459 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220906154143-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220906154143-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (251.27s)
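The failure here is not the container (its state is "Running") but the stale kubeconfig: the profile's context is missing, so `status` cannot extract an apiserver endpoint. A minimal sketch of the fix the warning itself suggests, assuming the same profile name:

    # confirm the profile's context is absent from the active kubeconfig
    kubectl config get-contexts
    # regenerate the kubeconfig entry for this profile, as the warning advises
    out/minikube-darwin-amd64 update-context -p old-k8s-version-20220906154143-22187
    # re-check; status should now resolve the apiserver endpoint
    out/minikube-darwin-amd64 status -p old-k8s-version-20220906154143-22187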

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220906154143-22187 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220906154143-22187 create -f testdata/busybox.yaml: exit status 1 (34.667167ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220906154143-22187" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-20220906154143-22187 create -f testdata/busybox.yaml failed: exit status 1
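`kubectl --context` resolves the name purely against the local kubeconfig, so the create fails client-side before any cluster traffic is attempted. A quick sketch for confirming which context names actually exist, assuming the default kubeconfig location:

    # tabular view of every context kubectl can see
    kubectl config get-contexts
    # or just the names, via JSONPath
    kubectl config view -o jsonpath='{.contexts[*].name}'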
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220906154143-22187
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220906154143-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8",
	        "Created": "2022-09-06T22:41:49.616534464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233297,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:41:49.947340708Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hosts",
	        "LogPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8-json.log",
	        "Name": "/old-k8s-version-20220906154143-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220906154143-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220906154143-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220906154143-22187",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220906154143-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220906154143-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac26f8e95c47984d4f033eccb6473e2db1bc4b07e981e1c31c620ad3db239966",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59416"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59412"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59414"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59415"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ac26f8e95c47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220906154143-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ccebcd496a2",
	                        "old-k8s-version-20220906154143-22187"
	                    ],
	                    "NetworkID": "3e22c4664759861d82314ff89c941b324eadf283ebb8fd6949e8fc4ad4c9a041",
	                    "EndpointID": "96050ea0ab9bfeeb7e9e137674d4e2b254dd8e8ae8f463ca42529fed01f0431b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 6 (416.014112ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 15:45:55.476005   36472 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220906154143-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220906154143-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220906154143-22187
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220906154143-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8",
	        "Created": "2022-09-06T22:41:49.616534464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233297,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:41:49.947340708Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hosts",
	        "LogPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8-json.log",
	        "Name": "/old-k8s-version-20220906154143-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220906154143-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220906154143-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220906154143-22187",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220906154143-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220906154143-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac26f8e95c47984d4f033eccb6473e2db1bc4b07e981e1c31c620ad3db239966",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59416"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59412"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59414"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59415"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ac26f8e95c47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220906154143-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ccebcd496a2",
	                        "old-k8s-version-20220906154143-22187"
	                    ],
	                    "NetworkID": "3e22c4664759861d82314ff89c941b324eadf283ebb8fd6949e8fc4ad4c9a041",
	                    "EndpointID": "96050ea0ab9bfeeb7e9e137674d4e2b254dd8e8ae8f463ca42529fed01f0431b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 6 (417.757292ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 15:45:55.959352   36486 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220906154143-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220906154143-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.00s)
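For reference, the verification this failure short-circuits is a plain create-and-wait round trip against the test's busybox manifest. A minimal sketch, where the pod name `busybox` is an assumption taken from the manifest filename rather than from this log:

    # submit the test workload
    kubectl --context old-k8s-version-20220906154143-22187 create -f testdata/busybox.yaml
    # block until the pod is Ready (pod name assumed)
    kubectl --context old-k8s-version-20220906154143-22187 wait --for=condition=Ready pod/busybox --timeout=120s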

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220906154143-22187 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0906 15:46:04.587929   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:46:07.025884   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:46:18.122623   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:46:20.040943   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:46:21.567922   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:24.327579   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:24.333282   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:24.343397   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:24.364655   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:24.406359   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:24.486540   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:24.647330   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:24.967457   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:25.070341   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:46:25.609544   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:26.890143   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:29.450482   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:32.916959   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:46:34.570839   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:46:37.571765   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:46:44.813158   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:47:05.263210   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:47:05.293436   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:47:06.031716   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:47:24.354965   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220906154143-22187 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.167297205s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220906154143-22187 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
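Every `connection refused` above targets the apiserver port inside the node, not the host mapping: the addon callback runs kubectl against https://localhost:8443 from within the container, and nothing is listening there. A sketch for probing that same endpoint directly, assuming curl is available in the kicbase node image:

    # run the probe inside the node; a healthy apiserver returns version JSON
    out/minikube-darwin-amd64 ssh -p old-k8s-version-20220906154143-22187 -- curl -sk https://localhost:8443/version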
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220906154143-22187 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220906154143-22187 describe deploy/metrics-server -n kube-system: exit status 1 (34.434315ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220906154143-22187" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220906154143-22187 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
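Once the context and apiserver are healthy, the assertion this test makes can be reproduced by hand: the metrics-server deployment's container image should carry the overridden registry. A sketch using the same profile and namespace:

    # addon status for the profile
    out/minikube-darwin-amd64 addons list -p old-k8s-version-20220906154143-22187
    # the image the test expects to contain fake.domain/k8s.gcr.io/echoserver:1.4
    kubectl --context old-k8s-version-20220906154143-22187 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'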
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220906154143-22187
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220906154143-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8",
	        "Created": "2022-09-06T22:41:49.616534464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233297,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:41:49.947340708Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hosts",
	        "LogPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8-json.log",
	        "Name": "/old-k8s-version-20220906154143-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220906154143-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220906154143-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220906154143-22187",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220906154143-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220906154143-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac26f8e95c47984d4f033eccb6473e2db1bc4b07e981e1c31c620ad3db239966",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59416"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59412"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59414"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59415"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ac26f8e95c47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220906154143-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ccebcd496a2",
	                        "old-k8s-version-20220906154143-22187"
	                    ],
	                    "NetworkID": "3e22c4664759861d82314ff89c941b324eadf283ebb8fd6949e8fc4ad4c9a041",
	                    "EndpointID": "96050ea0ab9bfeeb7e9e137674d4e2b254dd8e8ae8f463ca42529fed01f0431b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
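The JSON above is ordinary `docker container inspect` output for the kic node container. Individual fields can be extracted with a Go template instead of reading the whole document; the test harness itself resolves the forwarded SSH port this way later in this log. A sketch using the container name from this run (with the port map shown above it would print 59416):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-20220906154143-22187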
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 6 (423.692071ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0906 15:47:25.653699   36588 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220906154143-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220906154143-22187" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.69s)
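Note on this failure mode: exit status 6 from `minikube status` signals a kubeconfig problem rather than a stopped host; the stderr above shows that the profile's entry is missing from the kubeconfig file, so the endpoint IP cannot be extracted. As the stdout warning suggests, the usual manual remedy is to repoint the context. A sketch against this run's profile (assuming the cluster is otherwise reachable):

	out/minikube-darwin-amd64 update-context -p old-k8s-version-20220906154143-22187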

TestStartStop/group/old-k8s-version/serial/SecondStart (491.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220906154143-22187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0906 15:47:28.946386   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:47:40.043154   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:47:41.128611   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:47:41.294175   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:47:46.253804   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:47:47.106263   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 15:48:08.813651   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220906154143-22187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m6.795526242s)

-- stdout --
	* [old-k8s-version-20220906154143-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Kubernetes 1.25.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.0
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220906154143-22187 in cluster old-k8s-version-20220906154143-22187
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220906154143-22187" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0906 15:47:27.724326   36618 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:47:27.724481   36618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:47:27.724486   36618 out.go:309] Setting ErrFile to fd 2...
	I0906 15:47:27.724490   36618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:47:27.724596   36618 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:47:27.725040   36618 out.go:303] Setting JSON to false
	I0906 15:47:27.740136   36618 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10018,"bootTime":1662494429,"procs":332,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:47:27.740244   36618 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:47:27.762199   36618 out.go:177] * [old-k8s-version-20220906154143-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:47:27.804151   36618 notify.go:193] Checking for updates...
	I0906 15:47:27.826250   36618 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:47:27.848207   36618 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:47:27.874086   36618 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:47:27.895101   36618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:47:27.916094   36618 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:47:27.937719   36618 config.go:180] Loaded profile config "old-k8s-version-20220906154143-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 15:47:27.960007   36618 out.go:177] * Kubernetes 1.25.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.0
	I0906 15:47:27.980813   36618 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:47:28.050338   36618 docker.go:137] docker version: linux-20.10.17
	I0906 15:47:28.050475   36618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:47:28.182336   36618 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:47:28.123754068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:47:28.224979   36618 out.go:177] * Using the docker driver based on existing profile
	I0906 15:47:28.245671   36618 start.go:284] selected driver: docker
	I0906 15:47:28.245703   36618 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:47:28.245851   36618 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:47:28.249022   36618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:47:28.379018   36618 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:47:28.322340605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:47:28.379175   36618 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:47:28.379194   36618 cni.go:95] Creating CNI manager for ""
	I0906 15:47:28.379205   36618 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:47:28.379215   36618 start_flags.go:310] config:
	{Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:47:28.421686   36618 out.go:177] * Starting control plane node old-k8s-version-20220906154143-22187 in cluster old-k8s-version-20220906154143-22187
	I0906 15:47:28.442547   36618 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:47:28.463689   36618 out.go:177] * Pulling base image ...
	I0906 15:47:28.506539   36618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:47:28.506550   36618 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:47:28.506598   36618 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0906 15:47:28.506611   36618 cache.go:57] Caching tarball of preloaded images
	I0906 15:47:28.506757   36618 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:47:28.506777   36618 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 15:47:28.507478   36618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/config.json ...
	I0906 15:47:28.570394   36618 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:47:28.570413   36618 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:47:28.570424   36618 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:47:28.570474   36618 start.go:364] acquiring machines lock for old-k8s-version-20220906154143-22187: {Name:mkf6412c70024633cc757c4659ae827dd641d20a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:47:28.570554   36618 start.go:368] acquired machines lock for "old-k8s-version-20220906154143-22187" in 63.129µs
	I0906 15:47:28.570574   36618 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:47:28.570584   36618 fix.go:55] fixHost starting: 
	I0906 15:47:28.570821   36618 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Status}}
	I0906 15:47:28.634799   36618 fix.go:103] recreateIfNeeded on old-k8s-version-20220906154143-22187: state=Stopped err=<nil>
	W0906 15:47:28.634825   36618 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:47:28.677667   36618 out.go:177] * Restarting existing docker container for "old-k8s-version-20220906154143-22187" ...
	I0906 15:47:28.698507   36618 cli_runner.go:164] Run: docker start old-k8s-version-20220906154143-22187
	I0906 15:47:29.031374   36618 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Status}}
	I0906 15:47:29.153450   36618 kic.go:415] container "old-k8s-version-20220906154143-22187" state is running.
	I0906 15:47:29.154026   36618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:47:29.222072   36618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/config.json ...
	I0906 15:47:29.222435   36618 machine.go:88] provisioning docker machine ...
	I0906 15:47:29.222459   36618 ubuntu.go:169] provisioning hostname "old-k8s-version-20220906154143-22187"
	I0906 15:47:29.222536   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:29.288956   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:29.289172   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:29.289186   36618 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220906154143-22187 && echo "old-k8s-version-20220906154143-22187" | sudo tee /etc/hostname
	I0906 15:47:29.409404   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220906154143-22187
	
	I0906 15:47:29.409506   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:29.474903   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:29.475053   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:29.475069   36618 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220906154143-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220906154143-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220906154143-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:47:29.588648   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:47:29.588669   36618 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:47:29.588700   36618 ubuntu.go:177] setting up certificates
	I0906 15:47:29.588721   36618 provision.go:83] configureAuth start
	I0906 15:47:29.588785   36618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:47:29.653294   36618 provision.go:138] copyHostCerts
	I0906 15:47:29.653379   36618 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:47:29.653389   36618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:47:29.653484   36618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:47:29.653690   36618 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:47:29.653700   36618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:47:29.653761   36618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:47:29.653906   36618 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:47:29.653931   36618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:47:29.653991   36618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:47:29.654107   36618 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220906154143-22187 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220906154143-22187]
	I0906 15:47:29.819591   36618 provision.go:172] copyRemoteCerts
	I0906 15:47:29.819655   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:47:29.819697   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:29.883624   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:29.965244   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:47:29.981832   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0906 15:47:29.998925   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:47:30.015333   36618 provision.go:86] duration metric: configureAuth took 426.595674ms
	I0906 15:47:30.015347   36618 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:47:30.015480   36618 config.go:180] Loaded profile config "old-k8s-version-20220906154143-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 15:47:30.015536   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.078928   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:30.079080   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:30.079097   36618 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:47:30.191405   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:47:30.191416   36618 ubuntu.go:71] root file system type: overlay
	I0906 15:47:30.191564   36618 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:47:30.191653   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.257341   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:30.257518   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:30.257566   36618 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:47:30.378325   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
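The unit echoed back above relies on the standard systemd idiom for replacing a start command: the bare `ExecStart=` clears any previously defined command, and the second `ExecStart=/usr/bin/dockerd ...` supplies the replacement; without the clearing line, systemd would reject the unit with the "more than one ExecStart= setting" error quoted in its comments. The harness installs it via the diff/mv/daemon-reload command below; the effective unit can be confirmed by hand inside the node (a sketch):

	sudo systemctl cat docker.service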
	I0906 15:47:30.378415   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.442083   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:30.442233   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:30.442245   36618 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:47:30.558345   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:47:30.558369   36618 machine.go:91] provisioned docker machine in 1.335922482s
	I0906 15:47:30.558380   36618 start.go:300] post-start starting for "old-k8s-version-20220906154143-22187" (driver="docker")
	I0906 15:47:30.558385   36618 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:47:30.558449   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:47:30.558496   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.623093   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:30.705359   36618 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:47:30.708767   36618 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:47:30.708781   36618 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:47:30.708788   36618 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:47:30.708793   36618 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:47:30.708801   36618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:47:30.708902   36618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:47:30.709047   36618 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:47:30.709191   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:47:30.716071   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:47:30.733188   36618 start.go:303] post-start completed in 174.799919ms
	I0906 15:47:30.733264   36618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:47:30.733307   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.797534   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:30.879275   36618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:47:30.883629   36618 fix.go:57] fixHost completed within 2.313039871s
	I0906 15:47:30.883640   36618 start.go:83] releasing machines lock for "old-k8s-version-20220906154143-22187", held for 2.313072798s
	I0906 15:47:30.883707   36618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:47:30.948370   36618 ssh_runner.go:195] Run: systemctl --version
	I0906 15:47:30.948389   36618 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0906 15:47:30.948452   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.948458   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:31.016338   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:31.016439   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:31.248577   36618 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:47:31.259106   36618 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:47:31.259179   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:47:31.270476   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:47:31.283021   36618 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:47:31.353154   36618 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:47:31.426585   36618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:47:31.501244   36618 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:47:31.715701   36618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:47:31.753351   36618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:47:31.831581   36618 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0906 15:47:31.831765   36618 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220906154143-22187 dig +short host.docker.internal
	I0906 15:47:31.962726   36618 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:47:31.962882   36618 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:47:31.967458   36618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:47:31.977699   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:32.041454   36618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:47:32.041543   36618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:47:32.072812   36618 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0906 15:47:32.072839   36618 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:47:32.072992   36618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:47:32.104153   36618 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
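Both listings are identical: every image expected for v1.16.0 is already present in the node, which is why the harness logs "Images are preloaded, skipping loading" immediately below instead of extracting the preload tarball again. The same check can be reproduced by hand over the node's Docker daemon; a sketch via minikube ssh (assuming the node is running):

	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220906154143-22187 -- docker images --format '{{.Repository}}:{{.Tag}}'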
	I0906 15:47:32.104174   36618 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:47:32.104248   36618 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:47:32.178837   36618 cni.go:95] Creating CNI manager for ""
	I0906 15:47:32.178849   36618 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:47:32.178864   36618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:47:32.178876   36618 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220906154143-22187 NodeName:old-k8s-version-20220906154143-22187 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:47:32.178983   36618 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220906154143-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220906154143-22187
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
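The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few steps below and fed to kubeadm by minikube itself. A config like this can be sanity-checked without touching the cluster using kubeadm's dry-run mode; a sketch, run inside the node with the binary path taken from this log:

	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run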
	I0906 15:47:32.179051   36618 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220906154143-22187 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
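The kubelet drop-in above uses the standard systemd override idiom: the first, empty ExecStart= clears the packaged unit's command so the second ExecStart= can substitute minikube's own flag set. A hedged Go sketch that assembles that flag line from a map (helper and variable names are illustrative, not minikube's):

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // buildKubeletArgs renders --key=value flags in stable (sorted) order,
    // matching the flags visible in the ExecStart line above.
    func buildKubeletArgs(flags map[string]string) string {
        keys := make([]string, 0, len(flags))
        for k := range flags {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        parts := make([]string, 0, len(keys))
        for _, k := range keys {
            parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
        }
        return strings.Join(parts, " ")
    }

    func main() {
        fmt.Println(buildKubeletArgs(map[string]string{
            "bootstrap-kubeconfig": "/etc/kubernetes/bootstrap-kubelet.conf",
            "config":               "/var/lib/kubelet/config.yaml",
            "container-runtime":    "docker",
            "hostname-override":    "old-k8s-version-20220906154143-22187",
            "kubeconfig":           "/etc/kubernetes/kubelet.conf",
            "node-ip":              "192.168.67.2",
        }))
    }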
	I0906 15:47:32.179104   36618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0906 15:47:32.186748   36618 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:47:32.186801   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:47:32.194237   36618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0906 15:47:32.207073   36618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:47:32.219494   36618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0906 15:47:32.231803   36618 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:47:32.235747   36618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
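The /etc/hosts rewrite above is idempotent: grep -v strips any stale mapping for control-plane.minikube.internal before the fresh "192.168.67.2<TAB>control-plane.minikube.internal" entry is appended, and the result is copied back via a temp file. A rough Go equivalent (a sketch, not minikube's code; the real pipeline goes through /tmp/h.$$ for atomicity):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost drops any line already mapping name and appends ip<TAB>name,
    // mirroring the bash pipeline in the log entry above.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale mapping; re-added below
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.67.2", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }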
	I0906 15:47:32.245191   36618 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187 for IP: 192.168.67.2
	I0906 15:47:32.245304   36618 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:47:32.245353   36618 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:47:32.245429   36618 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/client.key
	I0906 15:47:32.245528   36618 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key.c7fa3a9e
	I0906 15:47:32.245585   36618 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key
	I0906 15:47:32.245795   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:47:32.245830   36618 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:47:32.245842   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:47:32.245883   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:47:32.245913   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:47:32.245939   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:47:32.246002   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:47:32.246567   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:47:32.263431   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:47:32.280089   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:47:32.296976   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:47:32.313479   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:47:32.330881   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:47:32.347457   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:47:32.364209   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:47:32.381370   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:47:32.398376   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:47:32.415314   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:47:32.435759   36618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:47:32.448194   36618 ssh_runner.go:195] Run: openssl version
	I0906 15:47:32.453444   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:47:32.461315   36618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:47:32.465115   36618 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:47:32.465156   36618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:47:32.470177   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:47:32.477357   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:47:32.486000   36618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:47:32.490512   36618 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:47:32.490562   36618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:47:32.495831   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:47:32.503224   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:47:32.510979   36618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:47:32.514699   36618 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:47:32.514745   36618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:47:32.519767   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
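The openssl x509 -hash calls above compute the subject-name hash OpenSSL uses to look up CAs in /etc/ssl/certs, and each ln -fs publishes the cert under <hash>.0 (3ec20f2e.0, b5213941.0, 51391683.0 in this run). A sketch of the same step in Go, shelling out to openssl exactly as the log does (function name is ours):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash symlinks pemPath at /etc/ssl/certs/<subject-hash>.0 so that
    // OpenSSL's hashed-directory lookup can find the CA.
    func linkByHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // mimic ln -fs (force)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }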
	I0906 15:47:32.527226   36618 kubeadm.go:396] StartCluster: {Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:47:32.527360   36618 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:47:32.556441   36618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:47:32.563997   36618 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:47:32.564011   36618 kubeadm.go:627] restartCluster start
	I0906 15:47:32.564056   36618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:47:32.571007   36618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:32.571067   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:32.636552   36618 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220906154143-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:47:32.636751   36618 kubeconfig.go:127] "old-k8s-version-20220906154143-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:47:32.637095   36618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
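kubeconfig.go:116/127 above: the profile's context had been dropped from the shared kubeconfig, so minikube re-adds it under a file lock (500ms retry delay, 1m timeout) before proceeding. A hedged sketch of the check-and-repair using k8s.io/client-go's clientcmd (the repair body is simplified; the real code also restores the matching cluster and user stanzas):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureContext re-adds a missing context entry to the kubeconfig at path.
    func ensureContext(path, name string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Contexts[name]; ok {
            return nil // context present; nothing to repair
        }
        cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        // illustrative path; the log uses the jenkins integration kubeconfig
        _ = ensureContext("/path/to/kubeconfig", "old-k8s-version-20220906154143-22187")
    }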
	I0906 15:47:32.638467   36618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:47:32.646914   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:32.646978   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:32.655320   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:32.857447   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:32.857626   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:32.867436   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.055442   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.055550   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.064764   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.255502   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.255571   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.264739   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.457093   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.457154   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.466479   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.656960   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.657112   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.666024   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.855454   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.855536   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.865698   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.056197   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.056330   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.066451   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.255620   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.255698   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.265530   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.456233   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.456324   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.465752   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.657449   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.657577   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.667461   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.856463   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.856602   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.867085   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.055895   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.056016   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.065978   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.257473   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.257650   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.268029   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.455491   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.455556   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.466826   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.657485   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.657645   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.667632   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.667642   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.667684   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.675713   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.675723   36618 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:47:35.675732   36618 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:47:35.675789   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:47:35.705109   36618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:47:35.715429   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:47:35.723190   36618 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Sep  6 22:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Sep  6 22:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Sep  6 22:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Sep  6 22:43 /etc/kubernetes/scheduler.conf
	
	I0906 15:47:35.723254   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:47:35.730810   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:47:35.738212   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:47:35.745776   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:47:35.753962   36618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:47:35.761363   36618 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:47:35.761377   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:35.813510   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:36.680895   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:36.890193   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:36.953067   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
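The five commands above are minikube's restart path: instead of a full kubeadm init, it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered config. A sketch of that loop (the command strings mirror the log verbatim; the Go scaffolding is ours):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the kubeadm init phases seen in the log, in order.
    func runInitPhases() error {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %q failed: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases(); err != nil {
            fmt.Println(err)
        }
    }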
	I0906 15:47:37.007310   36618 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:47:37.007369   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:37.515752   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:38.017852   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:38.517627   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:39.017853   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:39.516530   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:40.016953   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:40.516341   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:41.017684   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:41.516454   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:42.017850   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:42.516815   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:43.015747   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:43.517836   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:44.017465   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:44.515754   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:45.015795   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:45.515857   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:46.015952   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:46.515728   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:47.015825   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:47.515705   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:48.015772   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:48.516034   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:49.015789   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:49.516635   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:50.015757   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:50.515860   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:51.015748   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:51.517724   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:52.016065   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:52.516074   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:53.016794   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:53.516769   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:54.015802   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:54.516398   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:55.015770   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:55.517646   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:56.016754   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:56.517915   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:57.015874   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:57.517815   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:58.015852   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:58.516201   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:59.016002   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:59.515787   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:00.017830   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:00.516806   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:01.015847   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:01.516910   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:02.016851   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:02.517315   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:03.015916   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:03.516678   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:04.017779   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:04.517538   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:05.016029   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:05.516024   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:06.016955   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:06.516680   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:07.017903   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:07.517898   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:08.017766   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:08.516568   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:09.017963   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:09.516751   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:10.016603   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:10.515832   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:11.015880   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:11.515835   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:12.015846   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:12.515867   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:13.015821   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:13.515843   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:14.017921   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:14.515835   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:15.015965   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:15.516522   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:16.015903   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:16.515800   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:17.015904   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:17.515890   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:18.016702   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:18.516309   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:19.015893   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:19.515844   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:20.015875   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:20.515860   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:21.015861   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:21.515854   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:22.015816   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:22.515876   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:23.016575   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:23.516149   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:24.016188   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:24.515905   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:25.016602   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:25.518008   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:26.016339   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:26.517230   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:27.016823   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:27.516887   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:28.017965   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:28.517474   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:29.017430   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:29.518014   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:30.015916   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:30.516342   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:31.017840   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:31.516300   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:32.016103   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:32.517934   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:33.015945   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:33.516276   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:34.016960   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:34.517486   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:35.018019   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:35.516988   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:36.018005   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:36.516078   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
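Everything from 15:47:37 to 15:48:36 above is a single wait loop: api_server.go:51 polls pgrep for a kube-apiserver process roughly every 500ms, and after about a minute with no match it falls through to the diagnostic passes below. A minimal Go rendering of that poll (function name is ours):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for the apiserver pid the way the log does,
    // returning nil as soon as pgrep reports a match.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(bytes.TrimSpace(out)) > 0 {
                return nil // apiserver process found
            }
            time.Sleep(500 * time.Millisecond) // cadence seen in the timestamps above
        }
        return fmt.Errorf("timed out waiting for kube-apiserver process")
    }

    func main() {
        if err := waitForAPIServer(time.Minute); err != nil {
            fmt.Println(err)
        }
    }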
	I0906 15:48:37.018027   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:37.047583   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.047595   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:37.047651   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:37.076314   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.076326   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:37.076388   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:37.105746   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.105758   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:37.105817   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:37.133889   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.133902   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:37.133959   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:37.163122   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.163133   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:37.163190   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:37.191877   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.191889   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:37.191961   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:37.220968   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.220981   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:37.221041   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:37.249271   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.249284   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:37.249291   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:37.249297   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:37.289900   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:37.289914   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:37.301542   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:37.301557   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:37.353958   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:37.353972   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:37.353979   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:37.368054   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:37.368066   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:39.423867   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055782714s)
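Each diagnostic pass above and below gathers the same five sources; "describe nodes" keeps failing because nothing is listening on localhost:8443 yet. The collection amounts to the following (a sketch; the command strings are taken verbatim from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // the five log sources each diagnostic pass cycles through
        sources := [][2]string{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"Docker", "sudo journalctl -u docker -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            out, err := exec.Command("/bin/bash", "-c", s[1]).CombinedOutput()
            if err != nil {
                fmt.Printf("failed to gather %s: %v\n", s[0], err)
                continue
            }
            fmt.Printf("==> %s (%d bytes)\n", s[0], len(out))
        }
    }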
	I0906 15:48:41.924165   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:42.016977   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:42.047623   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.047635   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:42.047691   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:42.077331   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.077346   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:42.077407   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:42.107184   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.107199   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:42.107261   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:42.139027   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.139041   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:42.139107   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:42.175702   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.175713   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:42.175776   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:42.205201   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.205215   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:42.205276   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:42.234618   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.234630   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:42.234693   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:42.263411   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.263423   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:42.263430   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:42.263436   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:42.303796   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:42.303810   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:42.315377   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:42.315391   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:42.369166   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:42.369179   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:42.369186   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:42.383742   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:42.383754   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:44.433916   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050143742s)
	I0906 15:48:46.934245   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:47.016004   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:47.046573   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.046585   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:47.046640   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:47.077019   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.077031   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:47.077092   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:47.107321   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.107334   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:47.107389   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:47.137709   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.137721   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:47.137777   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:47.169281   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.169295   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:47.169355   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:47.197280   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.197292   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:47.197350   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:47.226913   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.226930   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:47.226989   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:47.257981   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.257992   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:47.258000   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:47.258006   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:49.312362   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054338446s)
	I0906 15:48:49.312470   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:49.312476   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:49.351688   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:49.351702   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:49.363819   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:49.363836   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:49.415301   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:49.415311   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:49.415318   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:51.930431   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:52.016736   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:52.046820   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.046831   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:52.046886   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:52.075587   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.075599   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:52.075657   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:52.105073   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.105085   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:52.105140   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:52.134789   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.134801   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:52.134864   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:52.162762   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.162782   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:52.162837   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:52.191879   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.191891   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:52.191962   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:52.221137   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.221149   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:52.221204   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:52.250240   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.250253   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:52.250259   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:52.250273   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:52.290244   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:52.290261   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:52.301674   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:52.301688   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:52.353298   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:52.353309   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:52.353316   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:52.366721   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:52.366733   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:54.420553   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053802778s)
	I0906 15:48:56.923005   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:57.018057   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:57.049543   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.049554   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:57.049612   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:57.078691   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.078706   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:57.078777   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:57.108669   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.108686   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:57.108764   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:57.141982   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.141996   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:57.142054   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:57.172447   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.172459   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:57.172522   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:57.200955   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.200971   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:57.201030   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:57.229233   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.229245   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:57.229306   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:57.258367   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.258379   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:57.258386   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:57.258394   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:57.271869   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:57.271881   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:59.326190   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054291416s)
	I0906 15:48:59.326348   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:59.326355   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:59.367821   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:59.367839   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:59.379672   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:59.379685   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:59.432111   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:01.932831   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:02.018145   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:02.048231   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.048244   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:02.048299   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:02.077507   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.077520   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:02.077580   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:02.106702   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.106713   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:02.106771   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:02.135555   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.135567   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:02.135631   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:02.164516   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.164529   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:02.164588   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:02.191790   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.191803   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:02.191862   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:02.220273   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.220286   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:02.220351   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:02.249683   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.249695   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:02.249702   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:02.249709   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:02.261264   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:02.261276   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:02.317306   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:02.317320   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:02.317326   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:02.333052   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:02.333066   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:04.387574   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054488465s)
	I0906 15:49:04.387694   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:04.387705   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:06.928014   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:07.015920   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:07.044778   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.044791   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:07.044847   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:07.076121   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.076133   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:07.076187   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:07.105220   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.105233   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:07.105295   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:07.135579   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.135592   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:07.135649   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:07.173144   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.173156   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:07.173217   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:07.201600   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.201611   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:07.201668   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:07.230545   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.230557   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:07.230612   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:07.261070   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.261082   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:07.261089   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:07.261099   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:07.272874   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:07.272894   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:07.325682   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:07.325698   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:07.325705   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:07.340738   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:07.340751   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:09.392387   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051618875s)
	I0906 15:49:09.392504   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:09.392513   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
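
Between container checks, the same four log sources are collected on every pass: kubelet and Docker via journalctl, kernel warnings via dmesg, and container status via crictl with a docker fallback. Below is a hypothetical local rendering of those commands; the shell commands are copied verbatim from the log, but the real harness runs them over SSH through ssh_runner.go, and the surrounding Go here is an assumption for illustration only.

    // Hypothetical sketch: run the shell commands the log shows, locally.
    // Most of these require sudo to succeed; none of this is minikube code.
    package main

    import (
        "fmt"
        "os/exec"
    )

    var sources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "Docker":           "sudo journalctl -u docker -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("  failed: %v\n", err)
            }
            fmt.Printf("%s\n", out)
        }
    }

The "container status" entry explains the backticked fragment in the log lines: use crictl if it is on PATH, otherwise fall back to plain docker ps.
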
	I0906 15:49:11.933164   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:12.016510   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:12.053586   36618 logs.go:274] 0 containers: []
	W0906 15:49:12.053599   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:12.053673   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:12.089329   36618 logs.go:274] 0 containers: []
	W0906 15:49:12.089341   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:12.089396   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:12.119213   36618 logs.go:274] 0 containers: []
	W0906 15:49:12.119224   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:12.119292   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:12.157922   36618 logs.go:274] 0 containers: []
	W0906 15:49:12.157935   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:12.157990   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:12.188341   36618 logs.go:274] 0 containers: []
	W0906 15:49:12.188354   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:12.188412   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:12.218459   36618 logs.go:274] 0 containers: []
	W0906 15:49:12.218470   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:12.218525   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:12.249119   36618 logs.go:274] 0 containers: []
	W0906 15:49:12.249131   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:12.249190   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:12.280793   36618 logs.go:274] 0 containers: []
	W0906 15:49:12.280809   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:12.280822   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:12.280831   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:12.321377   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:12.321388   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:12.334068   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:12.334082   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:12.406712   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:12.406728   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:12.406740   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:12.421675   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:12.421690   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:14.476631   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054920066s)
	I0906 15:49:16.977675   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:17.015945   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:17.046359   36618 logs.go:274] 0 containers: []
	W0906 15:49:17.046372   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:17.046427   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:17.077883   36618 logs.go:274] 0 containers: []
	W0906 15:49:17.077897   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:17.077954   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:17.110835   36618 logs.go:274] 0 containers: []
	W0906 15:49:17.110847   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:17.110908   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:17.141051   36618 logs.go:274] 0 containers: []
	W0906 15:49:17.141063   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:17.141121   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:17.173139   36618 logs.go:274] 0 containers: []
	W0906 15:49:17.173151   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:17.173212   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:17.206114   36618 logs.go:274] 0 containers: []
	W0906 15:49:17.206126   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:17.206180   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:17.240312   36618 logs.go:274] 0 containers: []
	W0906 15:49:17.240325   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:17.240389   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:17.269870   36618 logs.go:274] 0 containers: []
	W0906 15:49:17.269887   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:17.269896   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:17.269905   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:17.327468   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:17.327487   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:17.327495   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:17.362329   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:17.362342   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:19.416403   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054042623s)
	I0906 15:49:19.416515   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:19.416523   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:19.459208   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:19.459224   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:21.971415   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:22.016015   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:22.046051   36618 logs.go:274] 0 containers: []
	W0906 15:49:22.046066   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:22.046125   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:22.074723   36618 logs.go:274] 0 containers: []
	W0906 15:49:22.074736   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:22.074797   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:22.109270   36618 logs.go:274] 0 containers: []
	W0906 15:49:22.109283   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:22.109345   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:22.139467   36618 logs.go:274] 0 containers: []
	W0906 15:49:22.139481   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:22.139540   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:22.171965   36618 logs.go:274] 0 containers: []
	W0906 15:49:22.171977   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:22.172039   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:22.204597   36618 logs.go:274] 0 containers: []
	W0906 15:49:22.204612   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:22.204668   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:22.238600   36618 logs.go:274] 0 containers: []
	W0906 15:49:22.238613   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:22.238682   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:22.276410   36618 logs.go:274] 0 containers: []
	W0906 15:49:22.276423   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:22.276431   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:22.276438   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:22.325230   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:22.325250   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:22.338599   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:22.338620   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:22.404695   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:22.404709   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:22.404718   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:22.422250   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:22.422268   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:24.488426   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.066140768s)
	I0906 15:49:26.989652   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:27.016395   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:27.048063   36618 logs.go:274] 0 containers: []
	W0906 15:49:27.048077   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:27.048148   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:27.080672   36618 logs.go:274] 0 containers: []
	W0906 15:49:27.080686   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:27.080743   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:27.113193   36618 logs.go:274] 0 containers: []
	W0906 15:49:27.113205   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:27.113263   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:27.163737   36618 logs.go:274] 0 containers: []
	W0906 15:49:27.163749   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:27.163811   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:27.195187   36618 logs.go:274] 0 containers: []
	W0906 15:49:27.195200   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:27.195254   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:27.230380   36618 logs.go:274] 0 containers: []
	W0906 15:49:27.230392   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:27.230439   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:27.262844   36618 logs.go:274] 0 containers: []
	W0906 15:49:27.262856   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:27.262917   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:27.291934   36618 logs.go:274] 0 containers: []
	W0906 15:49:27.291946   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:27.291953   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:27.291960   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:29.352076   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060098816s)
	I0906 15:49:29.352192   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:29.352200   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:29.400930   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:29.400944   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:29.413598   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:29.413612   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:29.467672   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:29.467682   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:29.467689   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:31.984071   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:32.018150   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:32.052707   36618 logs.go:274] 0 containers: []
	W0906 15:49:32.052726   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:32.052823   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:32.081543   36618 logs.go:274] 0 containers: []
	W0906 15:49:32.081555   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:32.081624   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:32.116087   36618 logs.go:274] 0 containers: []
	W0906 15:49:32.116098   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:32.116161   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:32.144782   36618 logs.go:274] 0 containers: []
	W0906 15:49:32.144794   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:32.144852   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:32.174222   36618 logs.go:274] 0 containers: []
	W0906 15:49:32.174234   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:32.174288   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:32.203608   36618 logs.go:274] 0 containers: []
	W0906 15:49:32.203621   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:32.203683   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:32.231609   36618 logs.go:274] 0 containers: []
	W0906 15:49:32.231622   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:32.231687   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:32.264379   36618 logs.go:274] 0 containers: []
	W0906 15:49:32.264391   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:32.264398   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:32.264405   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:32.278459   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:32.278472   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:34.331580   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053089247s)
	I0906 15:49:34.331691   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:34.331699   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:34.370954   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:34.370969   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:34.382621   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:34.382638   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:34.434268   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
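
Every "describe nodes" attempt fails the same way: kubectl cannot reach localhost:8443 because nothing is listening there, which is consistent with the missing kube-apiserver container above. The symptom can be reproduced without kubectl by a plain TCP dial; a minimal sketch, assuming the apiserver would listen on localhost:8443 as the error text in this log indicates:

    // Minimal reachability check. A refused dial here matches the repeated
    // "connection to the server localhost:8443 was refused" lines.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8443")
    }
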
	I0906 15:49:36.935227   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:37.018148   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:37.061932   36618 logs.go:274] 0 containers: []
	W0906 15:49:37.061949   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:37.062026   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:37.095386   36618 logs.go:274] 0 containers: []
	W0906 15:49:37.095400   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:37.095462   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:37.124166   36618 logs.go:274] 0 containers: []
	W0906 15:49:37.124177   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:37.124234   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:37.155681   36618 logs.go:274] 0 containers: []
	W0906 15:49:37.155697   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:37.155767   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:37.204463   36618 logs.go:274] 0 containers: []
	W0906 15:49:37.204476   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:37.204532   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:37.233566   36618 logs.go:274] 0 containers: []
	W0906 15:49:37.233581   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:37.233641   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:37.275721   36618 logs.go:274] 0 containers: []
	W0906 15:49:37.275736   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:37.275793   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:37.308367   36618 logs.go:274] 0 containers: []
	W0906 15:49:37.308380   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:37.308387   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:37.308394   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:37.384780   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:37.384793   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:37.384801   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:37.402308   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:37.402322   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:39.454749   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052409325s)
	I0906 15:49:39.454860   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:39.454867   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:39.493208   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:39.493222   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:42.005247   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:42.016071   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:42.046542   36618 logs.go:274] 0 containers: []
	W0906 15:49:42.046555   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:42.046610   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:42.079292   36618 logs.go:274] 0 containers: []
	W0906 15:49:42.079309   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:42.079369   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:42.111179   36618 logs.go:274] 0 containers: []
	W0906 15:49:42.111194   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:42.111258   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:42.144022   36618 logs.go:274] 0 containers: []
	W0906 15:49:42.144035   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:42.144090   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:42.174056   36618 logs.go:274] 0 containers: []
	W0906 15:49:42.174069   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:42.174127   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:42.206217   36618 logs.go:274] 0 containers: []
	W0906 15:49:42.206228   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:42.206285   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:42.235557   36618 logs.go:274] 0 containers: []
	W0906 15:49:42.235569   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:42.235624   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:42.266322   36618 logs.go:274] 0 containers: []
	W0906 15:49:42.266339   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:42.266348   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:42.266357   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:42.305787   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:42.305805   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:42.317408   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:42.317422   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:42.375714   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:42.375723   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:42.375729   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:42.390435   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:42.390447   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:44.445566   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055101433s)
	I0906 15:49:46.946313   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:47.018223   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:47.049014   36618 logs.go:274] 0 containers: []
	W0906 15:49:47.049026   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:47.049081   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:47.078394   36618 logs.go:274] 0 containers: []
	W0906 15:49:47.078406   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:47.078462   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:47.107975   36618 logs.go:274] 0 containers: []
	W0906 15:49:47.107987   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:47.108040   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:47.137334   36618 logs.go:274] 0 containers: []
	W0906 15:49:47.137346   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:47.137406   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:47.165623   36618 logs.go:274] 0 containers: []
	W0906 15:49:47.165635   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:47.165692   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:47.195862   36618 logs.go:274] 0 containers: []
	W0906 15:49:47.195874   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:47.195932   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:47.224900   36618 logs.go:274] 0 containers: []
	W0906 15:49:47.224913   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:47.224997   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:47.255180   36618 logs.go:274] 0 containers: []
	W0906 15:49:47.255192   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:47.255200   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:47.255207   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:47.293657   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:47.293673   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:47.305983   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:47.305997   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:47.361668   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:47.361678   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:47.361685   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:47.378220   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:47.378232   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:49.432192   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053940038s)
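
Each pass opens with sudo pgrep -xnf kube-apiserver.*minikube.*, i.e. match the pattern against the full command line (-f), exactly (-x), and report only the newest match (-n). Since pgrep exits non-zero when nothing matches, a wrapper only needs the error to know the apiserver process is absent. A hypothetical sketch, not minikube's actual probe:

    // Hypothetical wrapper around the process probe from the log. pgrep exits
    // with status 1 when no process matches, which exec reports as an error.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process found:", err)
            return
        }
        fmt.Printf("kube-apiserver pid: %s", out)
    }
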
	I0906 15:49:51.932502   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:52.017451   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:52.048480   36618 logs.go:274] 0 containers: []
	W0906 15:49:52.048494   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:52.048551   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:52.077648   36618 logs.go:274] 0 containers: []
	W0906 15:49:52.077661   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:52.077715   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:52.110802   36618 logs.go:274] 0 containers: []
	W0906 15:49:52.110814   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:52.110871   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:52.140609   36618 logs.go:274] 0 containers: []
	W0906 15:49:52.140624   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:52.140681   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:52.168811   36618 logs.go:274] 0 containers: []
	W0906 15:49:52.168823   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:52.168882   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:52.198724   36618 logs.go:274] 0 containers: []
	W0906 15:49:52.198736   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:52.198795   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:52.228110   36618 logs.go:274] 0 containers: []
	W0906 15:49:52.228122   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:52.228178   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:52.257500   36618 logs.go:274] 0 containers: []
	W0906 15:49:52.257512   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:52.257519   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:52.257528   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:52.310252   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:52.310262   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:52.310267   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:52.324324   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:52.324337   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:54.376395   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052041457s)
	I0906 15:49:54.376506   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:54.376513   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:54.419780   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:54.419801   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:56.933238   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:57.017781   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:57.046101   36618 logs.go:274] 0 containers: []
	W0906 15:49:57.046113   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:57.046171   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:57.093972   36618 logs.go:274] 0 containers: []
	W0906 15:49:57.093989   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:57.094076   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:57.124439   36618 logs.go:274] 0 containers: []
	W0906 15:49:57.124451   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:57.124511   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:57.152730   36618 logs.go:274] 0 containers: []
	W0906 15:49:57.152743   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:57.152800   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:57.182554   36618 logs.go:274] 0 containers: []
	W0906 15:49:57.182566   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:57.182622   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:57.212059   36618 logs.go:274] 0 containers: []
	W0906 15:49:57.212071   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:57.212148   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:57.240083   36618 logs.go:274] 0 containers: []
	W0906 15:49:57.240094   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:57.240172   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:57.268472   36618 logs.go:274] 0 containers: []
	W0906 15:49:57.268486   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:57.268493   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:57.268501   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:57.308062   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:57.308080   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:57.320809   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:57.320821   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:57.373862   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:57.373873   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:57.373880   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:57.387272   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:57.387284   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:59.438037   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050734946s)
	I0906 15:50:01.938482   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:02.016565   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:02.045727   36618 logs.go:274] 0 containers: []
	W0906 15:50:02.045739   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:02.045813   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:02.080869   36618 logs.go:274] 0 containers: []
	W0906 15:50:02.080885   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:02.080958   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:02.120750   36618 logs.go:274] 0 containers: []
	W0906 15:50:02.120762   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:02.120818   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:02.174231   36618 logs.go:274] 0 containers: []
	W0906 15:50:02.174246   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:02.174311   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:02.208668   36618 logs.go:274] 0 containers: []
	W0906 15:50:02.208680   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:02.208744   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:02.239427   36618 logs.go:274] 0 containers: []
	W0906 15:50:02.239443   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:02.239500   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:02.276836   36618 logs.go:274] 0 containers: []
	W0906 15:50:02.276849   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:02.276912   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:02.313783   36618 logs.go:274] 0 containers: []
	W0906 15:50:02.313795   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:02.313801   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:02.313809   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:02.361059   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:02.361078   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:02.373892   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:02.373907   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:02.428607   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:02.428616   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:02.428624   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:02.442549   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:02.442562   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:04.496511   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05393131s)
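
Note that only the slow crictl/docker fallback gets a paired "Completed: ... (2.0…s)" line (ssh_runner.go:235) after its "Run:" line (ssh_runner.go:195); judging from this log, the runner appears to report a duration only for commands that take noticeably long. A hypothetical reconstruction of that bookkeeping, with the one-second threshold as an assumption rather than anything taken from minikube's source:

    // Hypothetical timing wrapper inferred from the paired Run/Completed
    // lines in this log; the threshold below is an illustrative guess.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        fmt.Printf("Run: /bin/bash -c %q\n", cmd)
        start := time.Now()
        _ = exec.Command("/bin/bash", "-c", cmd).Run()
        if d := time.Since(start); d > time.Second {
            fmt.Printf("Completed: /bin/bash -c %q: (%s)\n", cmd, d)
        }
    }
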
	I0906 15:50:06.997291   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:07.017251   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:07.047258   36618 logs.go:274] 0 containers: []
	W0906 15:50:07.047270   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:07.047325   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:07.076982   36618 logs.go:274] 0 containers: []
	W0906 15:50:07.076994   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:07.077049   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:07.106119   36618 logs.go:274] 0 containers: []
	W0906 15:50:07.106131   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:07.106190   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:07.135181   36618 logs.go:274] 0 containers: []
	W0906 15:50:07.135194   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:07.135253   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:07.164648   36618 logs.go:274] 0 containers: []
	W0906 15:50:07.164660   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:07.164716   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:07.194721   36618 logs.go:274] 0 containers: []
	W0906 15:50:07.194732   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:07.194788   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:07.224375   36618 logs.go:274] 0 containers: []
	W0906 15:50:07.224387   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:07.224444   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:07.254548   36618 logs.go:274] 0 containers: []
	W0906 15:50:07.254562   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:07.254569   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:07.254575   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:07.268579   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:07.268592   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:09.321779   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053168628s)
	I0906 15:50:09.321886   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:09.321893   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:09.361062   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:09.361078   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:09.372439   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:09.372452   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:09.427140   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:11.927232   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:12.016138   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:12.054012   36618 logs.go:274] 0 containers: []
	W0906 15:50:12.054031   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:12.054100   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:12.092318   36618 logs.go:274] 0 containers: []
	W0906 15:50:12.092331   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:12.092388   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:12.131612   36618 logs.go:274] 0 containers: []
	W0906 15:50:12.131623   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:12.131681   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:12.171215   36618 logs.go:274] 0 containers: []
	W0906 15:50:12.171231   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:12.171331   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:12.202683   36618 logs.go:274] 0 containers: []
	W0906 15:50:12.202697   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:12.202753   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:12.235170   36618 logs.go:274] 0 containers: []
	W0906 15:50:12.235183   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:12.235240   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:12.274520   36618 logs.go:274] 0 containers: []
	W0906 15:50:12.274532   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:12.274591   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:12.304912   36618 logs.go:274] 0 containers: []
	W0906 15:50:12.304924   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:12.304931   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:12.304939   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:12.344925   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:12.344947   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:12.356655   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:12.356669   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:12.433022   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:12.433033   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:12.433039   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:12.447951   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:12.447964   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:14.503992   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056009867s)
	I0906 15:50:17.006367   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:17.518326   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:17.550178   36618 logs.go:274] 0 containers: []
	W0906 15:50:17.550191   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:17.550247   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:17.579095   36618 logs.go:274] 0 containers: []
	W0906 15:50:17.579106   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:17.579164   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:17.607702   36618 logs.go:274] 0 containers: []
	W0906 15:50:17.607714   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:17.607779   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:17.636578   36618 logs.go:274] 0 containers: []
	W0906 15:50:17.636589   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:17.636645   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:17.666259   36618 logs.go:274] 0 containers: []
	W0906 15:50:17.666271   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:17.666329   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:17.695063   36618 logs.go:274] 0 containers: []
	W0906 15:50:17.695076   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:17.695154   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:17.724727   36618 logs.go:274] 0 containers: []
	W0906 15:50:17.724740   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:17.724802   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:17.753409   36618 logs.go:274] 0 containers: []
	W0906 15:50:17.753422   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:17.753429   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:17.753436   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:17.765032   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:17.765044   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:17.816861   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:17.816877   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:17.816885   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:17.830595   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:17.830608   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:19.888088   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057462209s)
	I0906 15:50:19.888191   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:19.888199   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:22.430089   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:22.517676   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:22.547831   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.547843   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:22.547901   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:22.575946   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.575958   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:22.576017   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:22.606119   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.606131   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:22.606187   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:22.642504   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.642517   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:22.642572   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:22.677905   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.677918   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:22.677974   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:22.710454   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.710468   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:22.710523   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:22.741213   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.747765   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:22.747822   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:22.780713   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.780735   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:22.780743   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:22.780750   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:22.825622   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:22.825640   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:22.849234   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:22.849253   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:22.904336   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:22.904346   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:22.904353   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:22.917771   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:22.917784   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:24.972764   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054937206s)
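(Editor's note: each "container status" gather runs one shell command with a built-in fallback: prefer crictl when it is on PATH, otherwise fall back to docker ps -a, so the gather succeeds on hosts with either runtime CLI. A hedged sketch of invoking that exact command from Go; in minikube it runs remotely over SSH, shown here as a plain local invocation.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exact fallback command from the log: `which crictl` resolves the
		// binary if present; otherwise the bare `crictl` invocation fails and
		// the `|| sudo docker ps -a` branch runs instead.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("container status failed: %v\n", err)
		}
		fmt.Print(string(out))
	}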
	I0906 15:50:27.473071   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:27.516271   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:27.546171   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.546183   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:27.546241   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:27.576500   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.576511   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:27.576565   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:27.605881   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.605898   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:27.605968   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:27.634722   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.634737   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:27.634806   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:27.682458   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.682471   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:27.682562   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:27.715777   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.715790   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:27.715848   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:27.747228   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.747241   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:27.747297   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:27.779174   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.779190   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:27.779197   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:27.779206   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:27.794916   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:27.794934   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:29.852358   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057404132s)
	I0906 15:50:29.852500   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:29.852510   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:29.890521   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:29.890535   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:29.901840   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:29.901851   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:29.954554   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:32.455578   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:32.518172   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:32.548482   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.548495   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:32.548562   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:32.581388   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.581401   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:32.581462   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:32.613423   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.613440   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:32.613516   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:32.646792   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.646806   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:32.646886   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:32.679058   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.679070   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:32.679132   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:32.706281   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.706294   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:32.706349   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:32.740556   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.745575   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:32.745632   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:32.775009   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.775021   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:32.775028   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:32.775035   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:32.815094   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:32.815109   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:32.827508   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:32.827521   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:32.892093   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:32.892116   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:32.892127   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:32.905761   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:32.905772   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:34.959908   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054119058s)
	I0906 15:50:37.461003   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:37.516871   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:37.552075   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.552087   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:37.552148   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:37.588429   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.588444   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:37.588519   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:37.621349   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.621361   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:37.621443   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:37.653420   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.653435   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:37.653497   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:37.684456   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.684471   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:37.684530   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:37.723554   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.723570   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:37.723702   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:37.763280   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.763293   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:37.763360   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:37.800010   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.800025   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:37.800033   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:37.800042   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:37.848311   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:37.848332   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:37.863600   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:37.863623   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:37.940260   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:37.940278   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:37.940317   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:37.957971   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:37.957982   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:40.011474   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053472357s)
	I0906 15:50:42.513826   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:43.018269   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:43.046980   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.046992   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:43.047050   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:43.075170   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.075183   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:43.075237   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:43.104514   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.104526   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:43.104582   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:43.133882   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.133894   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:43.133953   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:43.162356   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.162368   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:43.162431   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:43.197634   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.197648   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:43.197714   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:43.229904   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.229916   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:43.229973   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:43.261120   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.261132   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:43.261140   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:43.261146   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:43.300082   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:43.300097   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:43.312225   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:43.312238   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:43.365232   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:43.365242   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:43.365249   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:43.380452   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:43.380465   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:45.435023   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054541052s)
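(Editor's note: between gather cycles the log shows sudo pgrep -xnf kube-apiserver.*minikube.* firing roughly every five seconds, the shape of a poll-until-deadline health wait. A sketch of such a loop follows; the five-second interval is inferred from the timestamps above and the timeout value is an assumption, neither is taken from minikube's source.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` until a
	// matching process appears or the deadline passes. Interval assumed from
	// the ~5s spacing of the log lines; timeout is a hypothetical value.
	func waitForAPIServer(interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when at least one process matches.
			if exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServer(5*time.Second, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}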
	I0906 15:50:47.936850   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:48.016371   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:48.047334   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.047346   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:48.047400   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:48.079442   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.079453   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:48.079507   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:48.107817   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.107829   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:48.107887   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:48.136570   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.136583   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:48.136641   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:48.165367   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.165380   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:48.165438   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:48.193686   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.193699   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:48.193758   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:48.222001   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.222015   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:48.222073   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:48.249978   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.249990   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:48.249998   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:48.250005   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:48.287143   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:48.287158   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:48.298409   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:48.298422   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:48.356790   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:48.356801   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:48.356815   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:48.370256   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:48.370268   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:50.421619   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051333533s)
	I0906 15:50:52.922613   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:53.016799   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:53.048909   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.048921   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:53.048980   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:53.077529   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.077542   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:53.077606   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:53.105518   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.105529   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:53.105586   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:53.135007   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.135020   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:53.135079   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:53.163328   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.163341   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:53.163396   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:53.191132   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.191143   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:53.191199   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:53.219655   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.219668   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:53.219724   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:53.248534   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.248547   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:53.248554   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:53.248561   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:53.260251   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:53.260264   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:53.317573   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:53.317586   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:53.317592   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:53.332188   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:53.332202   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:55.385124   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052904546s)
	I0906 15:50:55.385230   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:55.385237   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:57.926420   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:58.017776   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:58.047321   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.047333   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:58.047397   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:58.075870   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.075882   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:58.075939   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:58.106804   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.106816   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:58.106874   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:58.136263   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.136276   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:58.136333   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:58.165517   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.165529   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:58.165586   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:58.194182   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.194194   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:58.194249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:58.222862   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.222874   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:58.222942   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:58.254161   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.254174   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:58.254181   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:58.254192   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:58.307613   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:58.307626   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:58.307633   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:58.321788   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:58.321800   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:00.373491   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051674038s)
	I0906 15:51:00.373598   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:00.373605   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:00.412768   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:00.412783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:02.926085   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:03.016795   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:03.045519   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.045535   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:03.045594   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:03.077002   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.077014   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:03.077070   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:03.106731   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.106742   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:03.106803   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:03.137065   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.137078   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:03.137139   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:03.165960   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.165972   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:03.166031   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:03.194538   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.194552   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:03.194615   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:03.223613   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.223625   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:03.223692   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:03.252621   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.252634   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:03.252642   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:03.252649   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:03.293046   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:03.293061   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:03.305992   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:03.306004   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:03.359768   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:03.359777   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:03.359783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:03.374067   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:03.374080   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:05.428493   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054394661s)
	I0906 15:51:07.930843   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:08.018364   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:08.050342   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.050356   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:08.050414   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:08.080802   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.080815   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:08.080874   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:08.110557   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.110570   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:08.110626   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:08.140588   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.140601   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:08.140658   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:08.171464   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.171477   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:08.171544   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:08.200615   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.200628   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:08.200684   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:08.231364   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.231376   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:08.231442   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:08.265358   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.265372   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:08.265379   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:08.265386   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:08.279229   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:08.279242   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:10.332629   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053369757s)
	I0906 15:51:10.332737   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:10.332744   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:10.371046   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:10.371061   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:10.382429   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:10.382441   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:10.434114   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
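(Editor's note: every cycle ends with kubectl describe nodes exiting 1 and stderr reporting "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port, consistent with the empty k8s_kube-apiserver container list. A sketch that runs the same command and classifies that failure; the binary and kubeconfig paths are copied from the log, the classification logic is hypothetical.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the log runs each cycle; paths copied verbatim.
		out, err := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes "+
				"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
		if err != nil && strings.Contains(string(out), "connection to the server") {
			// Non-zero exit plus a refused connection means no apiserver is
			// accepting connections on localhost:8443.
			fmt.Println("apiserver down:", err)
			return
		}
		fmt.Print(string(out))
	}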
	I0906 15:51:12.935172   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:13.016810   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:13.048233   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.048247   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:13.048307   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:13.076100   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.076112   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:13.076167   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:13.105312   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.105329   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:13.105397   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:13.134422   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.134434   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:13.134509   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:13.163088   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.163100   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:13.163156   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:13.192169   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.192181   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:13.192249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:13.221272   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.221284   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:13.221342   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:13.249896   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.249907   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:13.249914   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:13.249921   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:13.261316   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:13.261328   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:13.316693   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:13.316704   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:13.316710   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:13.333605   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:13.333618   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:15.389543   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05590645s)
	I0906 15:51:15.389649   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:15.389657   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:17.929544   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:18.017317   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:18.049613   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.049625   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:18.049682   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:18.078124   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.078137   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:18.078194   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:18.106846   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.106859   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:18.106916   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:18.136908   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.136920   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:18.136977   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:18.165211   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.165223   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:18.165281   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:18.194317   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.194329   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:18.194387   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:18.225530   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.225543   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:18.225602   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:18.254758   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.254770   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:18.254777   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:18.254783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:18.296280   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:18.296292   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:18.307948   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:18.307960   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:18.361906   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:18.361916   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:18.361922   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:18.376020   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:18.376033   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:20.430813   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054762389s)
	I0906 15:51:22.931094   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:23.016599   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:23.047383   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.047395   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:23.047452   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:23.076558   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.076570   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:23.076629   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:23.105158   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.105174   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:23.105249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:23.134903   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.134915   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:23.134970   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:23.163722   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.163737   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:23.163797   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:23.193082   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.193103   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:23.193179   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:23.223206   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.223218   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:23.223279   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:23.253242   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.253254   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:23.253264   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:23.253273   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:23.269441   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:23.269454   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:25.324087   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054614433s)
	I0906 15:51:25.324197   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:25.324204   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:25.362495   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:25.362508   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:25.373850   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:25.373864   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:25.427416   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:27.927755   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:28.018461   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:28.049083   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.049096   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:28.049151   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:28.076915   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.076926   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:28.076984   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:28.105609   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.105624   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:28.105682   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:28.135415   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.135427   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:28.135483   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:28.165044   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.165057   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:28.165117   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:28.194961   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.194972   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:28.195027   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:28.224560   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.224572   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:28.224626   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:28.253940   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.253953   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:28.253961   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:28.253970   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:28.293324   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:28.293338   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:28.304502   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:28.304515   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:28.358820   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:28.358831   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:28.358838   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:28.372433   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:28.372444   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:30.425146   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052684469s)
	I0906 15:51:32.927175   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:33.017341   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:33.048887   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.048900   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:33.048957   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:33.077441   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.077452   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:33.077514   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:33.106906   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.106919   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:33.106981   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:33.136315   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.136327   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:33.136384   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:33.164846   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.164859   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:33.164920   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:33.210609   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.210620   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:33.210680   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:33.242201   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.242213   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:33.242269   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:33.270214   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.270226   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:33.270233   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:33.270240   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:33.310549   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:33.310565   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:33.322387   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:33.322400   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:33.374793   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:33.374804   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:33.374812   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:33.388065   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:33.388077   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:35.437468   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04937256s)
	I0906 15:51:37.937790   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:37.948009   36618 kubeadm.go:631] restartCluster took 4m5.383312357s
	W0906 15:51:37.948093   36618 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0906 15:51:37.948113   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:51:38.373075   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:51:38.382614   36618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:51:38.390078   36618 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:51:38.390124   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:51:38.397462   36618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:51:38.397491   36618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:51:38.444468   36618 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:51:38.444514   36618 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:51:38.751851   36618 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:51:38.751951   36618 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:51:38.752044   36618 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:51:39.022935   36618 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:51:39.023421   36618 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:51:39.030200   36618 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:51:39.096240   36618 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:51:39.120068   36618 out.go:204]   - Generating certificates and keys ...
	I0906 15:51:39.120143   36618 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:51:39.120223   36618 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:51:39.120334   36618 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:51:39.120397   36618 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:51:39.120462   36618 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:51:39.120529   36618 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:51:39.120590   36618 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:51:39.120645   36618 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:51:39.120727   36618 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:51:39.120792   36618 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:51:39.120833   36618 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:51:39.120892   36618 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:51:39.515774   36618 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:51:39.628999   36618 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:51:39.816570   36618 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:51:39.960203   36618 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:51:39.960886   36618 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:51:40.003202   36618 out.go:204]   - Booting up control plane ...
	I0906 15:51:40.003301   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:51:40.003379   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:51:40.003447   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:51:40.003511   36618 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:51:40.003627   36618 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:52:19.941067   36618 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:52:19.941616   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:19.941780   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:24.939499   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:24.939741   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:34.933630   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:34.933937   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:54.920474   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:54.920618   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:53:34.893294   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:53:34.893561   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:53:34.893577   36618 kubeadm.go:317] 
	I0906 15:53:34.893622   36618 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:53:34.893683   36618 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:53:34.893694   36618 kubeadm.go:317] 
	I0906 15:53:34.893731   36618 kubeadm.go:317] This error is likely caused by:
	I0906 15:53:34.893787   36618 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:53:34.893917   36618 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:53:34.893925   36618 kubeadm.go:317] 
	I0906 15:53:34.894045   36618 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:53:34.894099   36618 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:53:34.894131   36618 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:53:34.894142   36618 kubeadm.go:317] 
	I0906 15:53:34.894228   36618 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:53:34.894312   36618 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:53:34.894377   36618 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:53:34.894411   36618 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:53:34.894474   36618 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:53:34.894503   36618 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:53:34.897717   36618 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:53:34.897844   36618 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:53:34.897942   36618 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:53:34.898018   36618 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:53:34.898086   36618 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0906 15:53:34.898216   36618 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0906 15:53:34.898243   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:53:35.322770   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:53:35.332350   36618 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:53:35.332397   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:53:35.340038   36618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:53:35.340060   36618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:53:35.385462   36618 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:53:35.385503   36618 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:53:35.695132   36618 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:53:35.695219   36618 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:53:35.695302   36618 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:53:35.979308   36618 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:53:35.979962   36618 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:53:35.986584   36618 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:53:36.049897   36618 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:53:36.071432   36618 out.go:204]   - Generating certificates and keys ...
	I0906 15:53:36.071511   36618 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:53:36.071599   36618 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:53:36.071705   36618 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:53:36.071754   36618 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:53:36.071836   36618 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:53:36.071932   36618 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:53:36.072028   36618 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:53:36.072072   36618 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:53:36.072132   36618 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:53:36.072207   36618 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:53:36.072239   36618 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:53:36.072293   36618 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:53:36.386098   36618 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:53:36.481839   36618 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:53:36.735962   36618 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:53:36.848356   36618 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:53:36.849031   36618 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:53:36.870925   36618 out.go:204]   - Booting up control plane ...
	I0906 15:53:36.871084   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:53:36.871201   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:53:36.871311   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:53:36.871457   36618 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:53:36.871744   36618 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:54:16.829056   36618 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:54:16.829917   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:16.830124   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:21.827690   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:21.827848   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:31.820981   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:31.821186   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:51.807304   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:51.807458   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:55:31.779661   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:55:31.779822   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:55:31.779830   36618 kubeadm.go:317] 
	I0906 15:55:31.779860   36618 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:55:31.779889   36618 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:55:31.779894   36618 kubeadm.go:317] 
	I0906 15:55:31.779921   36618 kubeadm.go:317] This error is likely caused by:
	I0906 15:55:31.779960   36618 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:55:31.780052   36618 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:55:31.780063   36618 kubeadm.go:317] 
	I0906 15:55:31.780169   36618 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:55:31.780219   36618 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:55:31.780247   36618 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:55:31.780251   36618 kubeadm.go:317] 
	I0906 15:55:31.780328   36618 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:55:31.780416   36618 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:55:31.780495   36618 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:55:31.780559   36618 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:55:31.780661   36618 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:55:31.780715   36618 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:55:31.783923   36618 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:55:31.784047   36618 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:55:31.784168   36618 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:55:31.784249   36618 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:55:31.784306   36618 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0906 15:55:31.784333   36618 kubeadm.go:398] StartCluster complete in 7m59.255788376s
	I0906 15:55:31.784406   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:55:31.816119   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.816135   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:55:31.816207   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:55:31.852948   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.852961   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:55:31.853021   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:55:31.884845   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.884856   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:55:31.884911   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:55:31.917054   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.917068   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:55:31.917132   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:55:31.948382   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.948395   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:55:31.948451   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:55:31.982328   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.982339   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:55:31.982387   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:55:32.013438   36618 logs.go:274] 0 containers: []
	W0906 15:55:32.013450   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:55:32.013510   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:55:32.044826   36618 logs.go:274] 0 containers: []
	W0906 15:55:32.044840   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:55:32.044847   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:55:32.044854   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:55:32.085941   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:55:32.085955   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:55:32.097748   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:55:32.097762   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:55:32.160044   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:55:32.160054   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:55:32.160060   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:55:32.174249   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:55:32.174260   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:55:34.234529   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060250655s)
	W0906 15:55:34.234640   36618 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 15:55:34.234654   36618 out.go:239] * 
	W0906 15:55:34.234769   36618 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:55:34.234800   36618 out.go:239] * 
	W0906 15:55:34.235311   36618 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:55:34.299125   36618 out.go:177] 
	W0906 15:55:34.342220   36618 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:55:34.342329   36618 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 15:55:34.342385   36618 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 15:55:34.385240   36618 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220906154143-22187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
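The kubeadm output above is the generic wait-control-plane timeout: the kubelet never answered its health endpoint on port 10248, so every probe was refused until kubeadm gave up. A minimal triage sketch, built only from the commands the log itself recommends and assuming the node is still reachable over minikube ssh (profile name taken from this run; CONTAINERID is a placeholder, exactly as in kubeadm's own hint):

    # Is the kubelet running, and why did it stop?
    out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 ssh "sudo systemctl status kubelet"
    out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 ssh "sudo journalctl -xeu kubelet"
    # Probe the same healthz endpoint kubeadm polls
    out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 ssh "curl -sSL http://localhost:10248/healthz"
    # List control-plane containers and read logs from a failing one
    out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 ssh "docker ps -a | grep kube | grep -v pause"
    out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 ssh "docker logs CONTAINERID"

If journalctl points at a cgroup-driver mismatch, the suggestion minikube prints in the stderr above applies: retry the start with --extra-config=kubelet.cgroup-driver=systemd.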
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220906154143-22187
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220906154143-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8",
	        "Created": "2022-09-06T22:41:49.616534464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 252066,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:47:29.039125207Z",
	            "FinishedAt": "2022-09-06T22:47:26.139154051Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hosts",
	        "LogPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8-json.log",
	        "Name": "/old-k8s-version-20220906154143-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220906154143-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220906154143-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220906154143-22187",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220906154143-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220906154143-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a2118a2c36e1b5c44aafe44f5808c04fdc08f7c9c97617d0abe3804e5920b4f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59556"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59557"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59558"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59559"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59560"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7a2118a2c36e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220906154143-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ccebcd496a2",
	                        "old-k8s-version-20220906154143-22187"
	                    ],
	                    "NetworkID": "3e22c4664759861d82314ff89c941b324eadf283ebb8fd6949e8fc4ad4c9a041",
	                    "EndpointID": "b81530b6afb4e1c30b7c1e1d7bbcce0431a21d5b730d06b677fa03cd39f407d8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
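The inspect dump above confirms the container itself is healthy ("Status": "running", "RestartCount": 0) even though the kubelet inside it is not. When only a few fields matter, docker's --format Go templates narrow the output; a small sketch against the same container name (the second command would print 59560 per the Ports block above):

    # Container state and restart count only
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-20220906154143-22187
    # Host port mapped to the in-container API server port 8443
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-20220906154143-22187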
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 2 (412.700976ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
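The --format={{.Host}} template used here reports only the host container's state, which is why "Running" together with exit status 2 is "may be ok": the nonzero exit signals that some other component is down, as the kubelet is in this run. Assuming the usual fields of minikube's status output (Host, Kubelet, APIServer, Kubeconfig), a wider template would make the split state visible in one call:

    out/minikube-darwin-amd64 status -p old-k8s-version-20220906154143-22187 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'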
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 logs -n 25: (3.500194169s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187                     | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187                     | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	| start   | -p                                                | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220906152522-22187                    | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	|         | kubenet-20220906152522-22187                      |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:42 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:45 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:50:23
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:50:23.383928   37212 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:50:23.384105   37212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:50:23.384110   37212 out.go:309] Setting ErrFile to fd 2...
	I0906 15:50:23.384114   37212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:50:23.384226   37212 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:50:23.384693   37212 out.go:303] Setting JSON to false
	I0906 15:50:23.400568   37212 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10194,"bootTime":1662494429,"procs":338,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:50:23.400663   37212 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:50:23.422701   37212 out.go:177] * [default-k8s-different-port-20220906154915-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:50:23.444975   37212 notify.go:193] Checking for updates...
	I0906 15:50:23.466707   37212 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:50:23.488647   37212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:50:23.509671   37212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:50:23.530748   37212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:50:23.552752   37212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:50:23.575417   37212 config.go:180] Loaded profile config "default-k8s-different-port-20220906154915-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:50:23.576052   37212 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:50:23.643500   37212 docker.go:137] docker version: linux-20.10.17
	I0906 15:50:23.643647   37212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:50:23.772962   37212 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:50:23.713734774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:50:23.816484   37212 out.go:177] * Using the docker driver based on existing profile
	I0906 15:50:23.837697   37212 start.go:284] selected driver: docker
	I0906 15:50:23.837744   37212 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220906154915-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:50:23.837918   37212 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:50:23.841270   37212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:50:23.972563   37212 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:50:23.911532634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:50:23.972720   37212 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:50:23.972740   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:50:23.972752   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:50:23.972759   37212 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220906154915-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:50:24.016175   37212 out.go:177] * Starting control plane node default-k8s-different-port-20220906154915-22187 in cluster default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.037370   37212 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:50:24.058389   37212 out.go:177] * Pulling base image ...
	I0906 15:50:24.100618   37212 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:50:24.100693   37212 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:50:24.100700   37212 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:50:24.100731   37212 cache.go:57] Caching tarball of preloaded images
	I0906 15:50:24.100971   37212 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:50:24.100991   37212 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:50:24.102052   37212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/config.json ...
	I0906 15:50:24.177644   37212 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:50:24.177679   37212 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:50:24.177695   37212 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:50:24.177751   37212 start.go:364] acquiring machines lock for default-k8s-different-port-20220906154915-22187: {Name:mke86da387e8e60d201d2bf660ca2b291cded1e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:50:24.177833   37212 start.go:368] acquired machines lock for "default-k8s-different-port-20220906154915-22187" in 64.558µs
	I0906 15:50:24.177857   37212 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:50:24.177868   37212 fix.go:55] fixHost starting: 
	I0906 15:50:24.178075   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:50:24.241080   37212 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220906154915-22187: state=Stopped err=<nil>
	W0906 15:50:24.241106   37212 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:50:24.289728   37212 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220906154915-22187" ...
	I0906 15:50:24.310938   37212 cli_runner.go:164] Run: docker start default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.652464   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:50:24.717004   37212 kic.go:415] container "default-k8s-different-port-20220906154915-22187" state is running.
	I0906 15:50:24.717609   37212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.788739   37212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/config.json ...
	I0906 15:50:24.789155   37212 machine.go:88] provisioning docker machine ...
	I0906 15:50:24.789182   37212 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220906154915-22187"
	I0906 15:50:24.789253   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.857628   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:24.857848   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:24.857870   37212 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220906154915-22187 && echo "default-k8s-different-port-20220906154915-22187" | sudo tee /etc/hostname
	I0906 15:50:24.982000   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220906154915-22187
	
	I0906 15:50:24.982089   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.047360   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:25.047575   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:25.047593   37212 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220906154915-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220906154915-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220906154915-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:50:25.159181   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:50:25.159203   37212 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:50:25.159232   37212 ubuntu.go:177] setting up certificates
	I0906 15:50:25.159243   37212 provision.go:83] configureAuth start
	I0906 15:50:25.159305   37212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.227062   37212 provision.go:138] copyHostCerts
	I0906 15:50:25.227183   37212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:50:25.227193   37212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:50:25.227287   37212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:50:25.227513   37212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:50:25.227523   37212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:50:25.227599   37212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:50:25.227736   37212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:50:25.227742   37212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:50:25.227797   37212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:50:25.227954   37212 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220906154915-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220906154915-22187]
	I0906 15:50:25.387707   37212 provision.go:172] copyRemoteCerts
	I0906 15:50:25.387773   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:50:25.387820   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.453896   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:25.538722   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:50:25.559997   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0906 15:50:25.578754   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:50:25.599062   37212 provision.go:86] duration metric: configureAuth took 439.804217ms
	I0906 15:50:25.599076   37212 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:50:25.599255   37212 config.go:180] Loaded profile config "default-k8s-different-port-20220906154915-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:50:25.599313   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.664450   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:25.664592   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:25.664602   37212 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:50:25.777980   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:50:25.777993   37212 ubuntu.go:71] root file system type: overlay
	I0906 15:50:25.778137   37212 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:50:25.778210   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.842319   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:25.842469   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:25.842532   37212 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:50:25.964564   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:50:25.964654   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.028806   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:26.028945   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:26.028959   37212 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:50:26.145650   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:50:26.145668   37212 machine.go:91] provisioned docker machine in 1.356498564s
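The "diff ... || { mv ...; }" command above works because diff exits non-zero when the files differ, so the staged unit is swapped in and docker restarted only on change. The same pattern written out (a sketch; paths taken from the log):

	# Install the staged unit only when it differs from the live one.
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	fi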
	I0906 15:50:26.145678   37212 start.go:300] post-start starting for "default-k8s-different-port-20220906154915-22187" (driver="docker")
	I0906 15:50:26.145685   37212 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:50:26.145738   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:50:26.145781   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.214583   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.297685   37212 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:50:26.301530   37212 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:50:26.301546   37212 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:50:26.301553   37212 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:50:26.301557   37212 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:50:26.301567   37212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:50:26.301695   37212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:50:26.301841   37212 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:50:26.301982   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:50:26.309414   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:50:26.326489   37212 start.go:303] post-start completed in 180.79968ms
	I0906 15:50:26.326571   37212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:50:26.326625   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.391005   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.472459   37212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:50:26.476963   37212 fix.go:57] fixHost completed within 2.299088562s
	I0906 15:50:26.476980   37212 start.go:83] releasing machines lock for "default-k8s-different-port-20220906154915-22187", held for 2.299131722s
	I0906 15:50:26.477075   37212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.543830   37212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:50:26.543849   37212 ssh_runner.go:195] Run: systemctl --version
	I0906 15:50:26.543919   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.543933   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.610348   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.610521   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.738898   37212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:50:26.748821   37212 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:50:26.748877   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:50:26.760220   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:50:26.772960   37212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:50:26.840012   37212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:50:26.910847   37212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:50:26.983057   37212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:50:27.222145   37212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:50:27.292399   37212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:50:27.361398   37212 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:50:27.370829   37212 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:50:27.370897   37212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:50:27.374774   37212 start.go:471] Will wait 60s for crictl version
	I0906 15:50:27.374820   37212 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:50:27.478851   37212 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:50:27.478919   37212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:50:27.513172   37212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:50:22.741213   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.747765   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:22.747822   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:22.780713   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.780735   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:22.780743   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:22.780750   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:22.825622   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:22.825640   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:22.849234   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:22.849253   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:22.904336   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:22.904346   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:22.904353   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:22.917771   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:22.917784   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:24.972764   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054937206s)
	I0906 15:50:27.473071   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:27.516271   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:27.546171   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.546183   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:27.546241   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:27.576500   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.576511   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:27.576565   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:27.605881   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.605898   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:27.605968   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:27.634722   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.634737   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:27.634806   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:27.682458   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.682471   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:27.682562   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:27.715777   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.715790   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:27.715848   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:27.573367   37212 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:50:27.573443   37212 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220906154915-22187 dig +short host.docker.internal
	I0906 15:50:27.702910   37212 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:50:27.703141   37212 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:50:27.707491   37212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
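The hosts rewrite above deliberately uses "cp" into /etc/hosts rather than renaming a new file over it: inside a Docker container /etc/hosts is a bind mount, so the file must be rewritten in place. The pattern in isolation (a sketch; IP and hostname taken from the log):

	# Drop any stale entry, append the current mapping, copy back in place.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.65.2\thost.minikube.internal\n'; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts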
	I0906 15:50:27.718288   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:27.784455   37212 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:50:27.784543   37212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:50:27.816064   37212 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:50:27.816080   37212 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:50:27.816149   37212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:50:27.847540   37212 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:50:27.847561   37212 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:50:27.847634   37212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:50:27.921264   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:50:27.921277   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:50:27.921293   37212 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:50:27.921305   37212 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220906154915-22187 NodeName:default-k8s-different-port-20220906154915-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:50:27.921421   37212 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220906154915-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
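The rendered config above is later copied to /var/tmp/minikube/kubeadm.yaml.new on the node. Not part of this run, but a config like it can be sanity-checked without mutating the node; --config and --dry-run are standard kubeadm init flags (the path is assumed):

	# Inside the minikube container (e.g. via "minikube ssh"):
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run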
	
	I0906 15:50:27.921503   37212 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220906154915-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0906 15:50:27.921560   37212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:50:27.928695   37212 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:50:27.928754   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:50:27.935705   37212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0906 15:50:27.947621   37212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:50:27.959675   37212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0906 15:50:27.971770   37212 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:50:27.975353   37212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:50:27.984747   37212 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187 for IP: 192.168.76.2
	I0906 15:50:27.984863   37212 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:50:27.984928   37212 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:50:27.985007   37212 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.key
	I0906 15:50:27.985064   37212 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/apiserver.key.31bdca25
	I0906 15:50:27.985114   37212 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/proxy-client.key
	I0906 15:50:27.985323   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:50:27.985358   37212 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:50:27.985366   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:50:27.985406   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:50:27.985436   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:50:27.985463   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:50:27.985530   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:50:27.986135   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:50:28.002943   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 15:50:28.019502   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:50:28.036140   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 15:50:28.052467   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:50:28.068669   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:50:28.085037   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:50:28.101413   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:50:28.117752   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:50:28.134563   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:50:28.151206   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:50:28.167822   37212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:50:28.179908   37212 ssh_runner.go:195] Run: openssl version
	I0906 15:50:28.185084   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:50:28.192667   37212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:50:28.196560   37212 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:50:28.196608   37212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:50:28.201652   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:50:28.208974   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:50:28.216562   37212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:50:28.220441   37212 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:50:28.220490   37212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:50:28.225402   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:50:28.232504   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:50:28.240088   37212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:50:28.243702   37212 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:50:28.243751   37212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:50:28.248732   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:50:28.255841   37212 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220906154915-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:50:28.255949   37212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:50:28.284221   37212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:50:28.291767   37212 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:50:28.291783   37212 kubeadm.go:627] restartCluster start
	I0906 15:50:28.291828   37212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:50:28.298403   37212 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:28.298458   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:28.362342   37212 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220906154915-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:50:28.362504   37212 kubeconfig.go:127] "default-k8s-different-port-20220906154915-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:50:28.362854   37212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:50:28.364281   37212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:50:28.371727   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.371785   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.380211   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:27.747228   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.747241   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:27.747297   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:27.779174   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.779190   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:27.779197   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:27.779206   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:27.794916   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:27.794934   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:29.852358   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057404132s)
	I0906 15:50:29.852500   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:29.852510   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:29.890521   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:29.890535   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:29.901840   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:29.901851   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:29.954554   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:32.455578   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:32.518172   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:32.548482   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.548495   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:32.548562   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:32.581388   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.581401   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:32.581462   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:32.613423   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.613440   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:32.613516   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:32.646792   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.646806   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:32.646886   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:32.679058   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.679070   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:32.679132   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:32.706281   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.706294   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:32.706349   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:28.580493   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.580582   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.588946   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:28.782354   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.782515   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.792901   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:28.980326   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.980414   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.990348   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.180465   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.180555   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.190991   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.380727   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.380854   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.391256   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.582341   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.582484   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.592874   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.782426   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.782560   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.792358   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.980694   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.980808   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.991278   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.180949   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.181077   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.190362   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.380565   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.380676   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.390714   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.581547   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.581695   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.591408   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.781668   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.781744   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.792474   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.982446   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.982554   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.992872   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.182373   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:31.182496   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:31.193523   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.382353   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:31.382500   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:31.392561   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.392570   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:31.392611   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:31.400629   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.400643   37212 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:50:31.400653   37212 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:50:31.400714   37212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:50:31.431389   37212 docker.go:443] Stopping containers: [445628b97660 10e168fb5a74 2d246fe6f58a cfa28b4cdb2d e034eba74ac8 bf2a1afd23f7 be82f8452929 127aa7aa3d93 5dd7d8a472ca 9a5362ed7e65 c5cab96a6b6c eb0c740ea4ae b7c21e681624 dc41a5b71413 cd8d53e3fe24 005830c8f8c2]
	I0906 15:50:31.431462   37212 ssh_runner.go:195] Run: docker stop 445628b97660 10e168fb5a74 2d246fe6f58a cfa28b4cdb2d e034eba74ac8 bf2a1afd23f7 be82f8452929 127aa7aa3d93 5dd7d8a472ca 9a5362ed7e65 c5cab96a6b6c eb0c740ea4ae b7c21e681624 dc41a5b71413 cd8d53e3fe24 005830c8f8c2
	I0906 15:50:31.460862   37212 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:50:31.471093   37212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:50:31.478456   37212 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep  6 22:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep  6 22:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep  6 22:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:49 /etc/kubernetes/scheduler.conf
	
	I0906 15:50:31.478500   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 15:50:31.485784   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 15:50:31.493288   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 15:50:31.500416   37212 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.500477   37212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:50:31.507449   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 15:50:31.515558   37212 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.515611   37212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 15:50:31.523180   37212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:50:31.530863   37212 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:50:31.530878   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:31.576875   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.388889   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.520033   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.572876   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.645195   37212 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:50:32.645266   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:33.159857   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:32.740556   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.745575   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:32.745632   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:32.775009   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.775021   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:32.775028   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:32.775035   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:32.815094   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:32.815109   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:32.827508   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:32.827521   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:32.892093   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:32.892116   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:32.892127   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:32.905761   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:32.905772   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:34.959908   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054119058s)
	I0906 15:50:37.461003   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:37.516871   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:37.552075   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.552087   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:37.552148   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:37.588429   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.588444   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:37.588519   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:37.621349   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.621361   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:37.621443   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:37.653420   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.653435   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:37.653497   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:37.684456   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.684471   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:37.684530   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:37.723554   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.723570   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:37.723702   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:33.657999   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:33.704057   37212 api_server.go:71] duration metric: took 1.058865284s to wait for apiserver process to appear ...
	I0906 15:50:33.704096   37212 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:50:33.704112   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:33.705208   37212 api_server.go:256] stopped: https://127.0.0.1:59719/healthz: Get "https://127.0.0.1:59719/healthz": EOF
	I0906 15:50:34.205313   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:36.341332   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:50:36.341358   37212 api_server.go:102] status: https://127.0.0.1:59719/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:50:36.705764   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:36.711926   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:50:36.711938   37212 api_server.go:102] status: https://127.0.0.1:59719/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:50:37.205327   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:37.211521   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:50:37.211533   37212 api_server.go:102] status: https://127.0.0.1:59719/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:50:37.705372   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:37.710926   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 200:
	ok
	I0906 15:50:37.717670   37212 api_server.go:140] control plane version: v1.25.0
	I0906 15:50:37.717683   37212 api_server.go:130] duration metric: took 4.013570504s to wait for apiserver health ...
	I0906 15:50:37.717690   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:50:37.717696   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:50:37.717709   37212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:50:37.726280   37212 system_pods.go:59] 8 kube-system pods found
	I0906 15:50:37.726299   37212 system_pods.go:61] "coredns-565d847f94-wkvwz" [31b21348-6685-429e-8101-a138d6f44c5a] Running
	I0906 15:50:37.726311   37212 system_pods.go:61] "etcd-default-k8s-different-port-20220906154915-22187" [06c9eba4-2eb0-4b4a-8923-14badd5235b3] Running
	I0906 15:50:37.726324   37212 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220906154915-22187" [81942a28-8b69-4b86-80be-4c3d54e8c71e] Running
	I0906 15:50:37.726333   37212 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220906154915-22187" [c814ed45-a563-4476-adc1-e14de96156f8] Running
	I0906 15:50:37.726343   37212 system_pods.go:61] "kube-proxy-t7vx8" [019bd2fb-a0da-477f-9df3-74757d6d787d] Running
	I0906 15:50:37.726356   37212 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220906154915-22187" [9434ace8-3845-48cc-8fff-67183116a1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:50:37.726364   37212 system_pods.go:61] "metrics-server-5c8fd5cf8-wnhzc" [23e9d7cc-1aca-4e2e-8ea9-ba6493231ca0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:50:37.726368   37212 system_pods.go:61] "storage-provisioner" [54518a3e-e36f-4f53-b169-0a62c4eabd66] Running
	I0906 15:50:37.726372   37212 system_pods.go:74] duration metric: took 8.658942ms to wait for pod list to return data ...
	I0906 15:50:37.726378   37212 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:50:37.729378   37212 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:50:37.729393   37212 node_conditions.go:123] node cpu capacity is 6
	I0906 15:50:37.729406   37212 node_conditions.go:105] duration metric: took 3.024346ms to run NodePressure ...
	I0906 15:50:37.729419   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:37.929739   37212 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:50:37.937679   37212 kubeadm.go:778] kubelet initialised
	I0906 15:50:37.937700   37212 kubeadm.go:779] duration metric: took 7.945238ms waiting for restarted kubelet to initialise ...
	I0906 15:50:37.937713   37212 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:50:37.946600   37212 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-wkvwz" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.953168   37212 pod_ready.go:92] pod "coredns-565d847f94-wkvwz" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:37.953178   37212 pod_ready.go:81] duration metric: took 6.561071ms waiting for pod "coredns-565d847f94-wkvwz" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.953187   37212 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.996891   37212 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:37.996900   37212 pod_ready.go:81] duration metric: took 43.709214ms waiting for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.996907   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.002735   37212 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:38.002745   37212 pod_ready.go:81] duration metric: took 5.833437ms waiting for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.002752   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.120788   37212 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:38.120798   37212 pod_ready.go:81] duration metric: took 118.040762ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.120805   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t7vx8" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.763280   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.763293   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:37.763360   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:37.800010   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.800025   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:37.800033   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:37.800042   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:37.848311   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:37.848332   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:37.863600   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:37.863623   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:37.940260   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:37.940278   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:37.940317   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:37.957971   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:37.957982   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:40.011474   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053472357s)
	I0906 15:50:42.513826   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:38.521134   37212 pod_ready.go:92] pod "kube-proxy-t7vx8" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:38.521144   37212 pod_ready.go:81] duration metric: took 400.332006ms waiting for pod "kube-proxy-t7vx8" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.521150   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:40.932176   37212 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:43.018269   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:43.046980   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.046992   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:43.047050   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:43.075170   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.075183   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:43.075237   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:43.104514   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.104526   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:43.104582   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:43.133882   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.133894   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:43.133953   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:43.162356   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.162368   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:43.162431   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:43.197634   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.197648   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:43.197714   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:43.229904   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.229916   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:43.229973   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:43.261120   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.261132   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:43.261140   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:43.261146   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:43.300082   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:43.300097   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:43.312225   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:43.312238   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:43.365232   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:43.365242   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:43.365249   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:43.380452   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:43.380465   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:45.435023   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054541052s)
	I0906 15:50:43.431899   37212 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:45.931027   37212 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:47.430333   37212 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:47.430345   37212 pod_ready.go:81] duration metric: took 8.909165101s waiting for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:47.430351   37212 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:47.936850   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:48.016371   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:48.047334   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.047346   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:48.047400   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:48.079442   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.079453   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:48.079507   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:48.107817   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.107829   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:48.107887   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:48.136570   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.136583   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:48.136641   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:48.165367   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.165380   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:48.165438   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:48.193686   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.193699   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:48.193758   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:48.222001   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.222015   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:48.222073   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:48.249978   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.249990   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:48.249998   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:48.250005   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:48.287143   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:48.287158   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:48.298409   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:48.298422   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:48.356790   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:48.356801   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:48.356815   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:48.370256   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:48.370268   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:50.421619   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051333533s)
	I0906 15:50:49.443659   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:51.942260   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:52.922613   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:53.016799   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:53.048909   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.048921   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:53.048980   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:53.077529   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.077542   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:53.077606   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:53.105518   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.105529   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:53.105586   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:53.135007   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.135020   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:53.135079   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:53.163328   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.163341   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:53.163396   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:53.191132   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.191143   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:53.191199   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:53.219655   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.219668   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:53.219724   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:53.248534   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.248547   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:53.248554   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:53.248561   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:53.260251   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:53.260264   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:53.317573   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:53.317586   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:53.317592   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:53.332188   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:53.332202   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:55.385124   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052904546s)
	I0906 15:50:55.385230   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:55.385237   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:53.942333   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:55.942494   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:57.926420   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:58.017776   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:58.047321   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.047333   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:58.047397   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:58.075870   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.075882   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:58.075939   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:58.106804   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.106816   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:58.106874   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:58.136263   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.136276   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:58.136333   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:58.165517   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.165529   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:58.165586   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:58.194182   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.194194   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:58.194249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:58.222862   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.222874   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:58.222942   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:58.254161   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.254174   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:58.254181   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:58.254192   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:58.307613   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:58.307626   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:58.307633   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:58.321788   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:58.321800   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:00.373491   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051674038s)
	I0906 15:51:00.373598   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:00.373605   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:00.412768   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:00.412783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:58.442534   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:00.942919   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:02.926085   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
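The pgrep probe repeated every five seconds here is the apiserver health wait: -f matches against the full command line, -x requires the pattern to match it exactly, and -n returns only the newest matching process. Run by hand:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"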
	I0906 15:51:03.016795   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:03.045519   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.045535   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:03.045594   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:03.077002   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.077014   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:03.077070   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:03.106731   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.106742   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:03.106803   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:03.137065   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.137078   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:03.137139   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:03.165960   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.165972   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:03.166031   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:03.194538   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.194552   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:03.194615   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:03.223613   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.223625   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:03.223692   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:03.252621   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.252634   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:03.252642   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:03.252649   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:03.293046   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:03.293061   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:03.305992   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:03.306004   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:03.359768   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:03.359777   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:03.359783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:03.374067   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:03.374080   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:05.428493   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054394661s)
	I0906 15:51:03.440923   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:05.940922   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:07.930843   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:08.018364   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:08.050342   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.050356   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:08.050414   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:08.080802   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.080815   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:08.080874   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:08.110557   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.110570   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:08.110626   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:08.140588   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.140601   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:08.140658   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:08.171464   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.171477   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:08.171544   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:08.200615   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.200628   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:08.200684   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:08.231364   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.231376   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:08.231442   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:08.265358   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.265372   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:08.265379   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:08.265386   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:08.279229   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:08.279242   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:10.332629   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053369757s)
	I0906 15:51:10.332737   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:10.332744   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:10.371046   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:10.371061   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:10.382429   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:10.382441   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:10.434114   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:08.442971   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:10.943493   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:12.935172   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:13.016810   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:13.048233   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.048247   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:13.048307   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:13.076100   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.076112   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:13.076167   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:13.105312   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.105329   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:13.105397   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:13.134422   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.134434   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:13.134509   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:13.163088   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.163100   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:13.163156   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:13.192169   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.192181   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:13.192249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:13.221272   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.221284   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:13.221342   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:13.249896   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.249907   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:13.249914   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:13.249921   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:13.261316   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:13.261328   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:13.316693   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:13.316704   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:13.316710   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:13.333605   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:13.333618   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:15.389543   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05590645s)
	I0906 15:51:15.389649   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:15.389657   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:13.441127   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:15.442305   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:17.940913   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:17.929544   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:18.017317   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:18.049613   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.049625   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:18.049682   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:18.078124   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.078137   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:18.078194   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:18.106846   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.106859   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:18.106916   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:18.136908   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.136920   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:18.136977   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:18.165211   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.165223   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:18.165281   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:18.194317   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.194329   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:18.194387   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:18.225530   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.225543   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:18.225602   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:18.254758   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.254770   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:18.254777   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:18.254783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:18.296280   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:18.296292   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:18.307948   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:18.307960   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:18.361906   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:18.361916   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:18.361922   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:18.376020   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:18.376033   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:20.430813   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054762389s)
	I0906 15:51:19.942784   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:22.441622   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:22.931094   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:23.016599   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:23.047383   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.047395   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:23.047452   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:23.076558   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.076570   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:23.076629   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:23.105158   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.105174   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:23.105249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:23.134903   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.134915   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:23.134970   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:23.163722   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.163737   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:23.163797   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:23.193082   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.193103   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:23.193179   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:23.223206   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.223218   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:23.223279   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:23.253242   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.253254   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:23.253264   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:23.253273   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:23.269441   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:23.269454   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:25.324087   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054614433s)
	I0906 15:51:25.324197   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:25.324204   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:25.362495   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:25.362508   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:25.373850   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:25.373864   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:25.427416   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:24.443600   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:26.943789   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:27.927755   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:28.018461   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:28.049083   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.049096   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:28.049151   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:28.076915   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.076926   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:28.076984   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:28.105609   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.105624   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:28.105682   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:28.135415   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.135427   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:28.135483   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:28.165044   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.165057   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:28.165117   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:28.194961   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.194972   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:28.195027   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:28.224560   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.224572   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:28.224626   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:28.253940   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.253953   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:28.253961   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:28.253970   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:28.293324   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:28.293338   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:28.304502   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:28.304515   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:28.358820   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:28.358831   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:28.358838   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:28.372433   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:28.372444   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:30.425146   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052684469s)
	I0906 15:51:29.442830   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:31.940449   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:32.927175   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:33.017341   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:33.048887   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.048900   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:33.048957   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:33.077441   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.077452   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:33.077514   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:33.106906   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.106919   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:33.106981   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:33.136315   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.136327   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:33.136384   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:33.164846   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.164859   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:33.164920   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:33.210609   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.210620   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:33.210680   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:33.242201   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.242213   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:33.242269   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:33.270214   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.270226   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:33.270233   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:33.270240   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:33.310549   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:33.310565   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:33.322387   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:33.322400   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:33.374793   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:33.374804   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:33.374812   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:33.388065   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:33.388077   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:35.437468   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04937256s)
	I0906 15:51:33.941085   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:36.442094   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:37.937790   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:37.948009   36618 kubeadm.go:631] restartCluster took 4m5.383312357s
	W0906 15:51:37.948093   36618 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0906 15:51:37.948113   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:51:38.373075   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:51:38.382614   36618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:51:38.390078   36618 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:51:38.390124   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:51:38.397462   36618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:51:38.397491   36618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
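After giving up on the restart, minikube wipes state and retries from scratch: kubeadm reset followed by kubeadm init against the pinned v1.16.0 binaries. The pair of commands, abridged (the full --ignore-preflight-errors list is in the Start line above):
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/dockershim.sock --force
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,SystemVerification   # abridged list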
	I0906 15:51:38.444468   36618 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:51:38.444514   36618 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:51:38.751851   36618 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:51:38.751951   36618 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:51:38.752044   36618 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:51:39.022935   36618 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:51:39.023421   36618 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:51:39.030200   36618 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:51:39.096240   36618 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:51:39.120068   36618 out.go:204]   - Generating certificates and keys ...
	I0906 15:51:39.120143   36618 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:51:39.120223   36618 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:51:39.120334   36618 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:51:39.120397   36618 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:51:39.120462   36618 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:51:39.120529   36618 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:51:39.120590   36618 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:51:39.120645   36618 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:51:39.120727   36618 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:51:39.120792   36618 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:51:39.120833   36618 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:51:39.120892   36618 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:51:39.515774   36618 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:51:39.628999   36618 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:51:39.816570   36618 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:51:39.960203   36618 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:51:39.960886   36618 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:51:40.003202   36618 out.go:204]   - Booting up control plane ...
	I0906 15:51:40.003301   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:51:40.003379   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:51:40.003447   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:51:40.003511   36618 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:51:40.003627   36618 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:51:38.941689   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:41.443572   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:43.941795   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:46.441286   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:48.941320   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:51.442966   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:53.940073   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:55.940873   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:57.943480   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:00.441774   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:02.940658   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:04.940941   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:06.943633   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:09.443762   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:11.940301   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:13.941452   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:15.941955   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:19.941067   36618 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:52:19.941616   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:19.941780   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
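kubeadm's kubelet-check is nothing more than an HTTP GET against the kubelet's local healthz endpoint on port 10248; connection refused, as here, means the kubelet process never came up at all. The manual equivalent on the node:
	curl -sSL --max-time 2 http://localhost:10248/healthz; echo
	systemctl is-active kubelet   # likely not "active" while the check above fails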
	I0906 15:52:18.443378   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:20.941205   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:22.944080   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:24.939499   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:24.939741   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:25.440548   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:27.441072   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:29.940396   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:31.942049   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:34.933630   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:34.933937   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:33.942419   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:36.444518   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:38.941160   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:40.941401   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:42.942085   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:45.442441   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:47.940847   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:49.943953   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:52.441492   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:54.920474   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:54.920618   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:54.940040   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:56.941544   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:58.943275   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:01.440638   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:03.441633   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:05.940226   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:07.941507   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:10.440810   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:12.440867   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:14.441996   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:16.943539   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:19.441181   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:21.443341   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:23.443498   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:25.942678   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:27.943717   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:30.442290   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:32.941144   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:34.893294   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:53:34.893561   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:53:34.893577   36618 kubeadm.go:317] 
	I0906 15:53:34.893622   36618 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:53:34.893683   36618 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:53:34.893694   36618 kubeadm.go:317] 
	I0906 15:53:34.893731   36618 kubeadm.go:317] This error is likely caused by:
	I0906 15:53:34.893787   36618 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:53:34.893917   36618 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:53:34.893925   36618 kubeadm.go:317] 
	I0906 15:53:34.894045   36618 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:53:34.894099   36618 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:53:34.894131   36618 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:53:34.894142   36618 kubeadm.go:317] 
	I0906 15:53:34.894228   36618 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:53:34.894312   36618 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:53:34.894377   36618 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:53:34.894411   36618 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:53:34.894474   36618 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:53:34.894503   36618 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:53:34.897717   36618 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:53:34.897844   36618 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:53:34.897942   36618 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:53:34.898018   36618 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:53:34.898086   36618 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0906 15:53:34.898216   36618 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
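kubeadm's own hints above reduce to three checks; as a single pass on the node (the container runtime here is Docker, per the log):
	systemctl status kubelet --no-pager            # is the kubelet running at all?
	journalctl -xeu kubelet --no-pager | tail -50  # last kubelet errors
	docker ps -a | grep kube | grep -v pause       # any control-plane containers?
	# then: docker logs <CONTAINERID> for whichever container is failing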
	
	I0906 15:53:34.898243   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:53:35.322770   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:53:35.332350   36618 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:53:35.332397   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:53:35.340038   36618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:53:35.340060   36618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:53:35.385462   36618 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:53:35.385503   36618 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:53:35.695132   36618 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:53:35.695219   36618 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:53:35.695302   36618 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:53:35.979308   36618 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:53:35.979962   36618 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:53:35.986584   36618 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:53:36.049897   36618 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:53:36.071432   36618 out.go:204]   - Generating certificates and keys ...
	I0906 15:53:36.071511   36618 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:53:36.071599   36618 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:53:36.071705   36618 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:53:36.071754   36618 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:53:36.071836   36618 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:53:36.071932   36618 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:53:36.072028   36618 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:53:36.072072   36618 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:53:36.072132   36618 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:53:36.072207   36618 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:53:36.072239   36618 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:53:36.072293   36618 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:53:36.386098   36618 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:53:36.481839   36618 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:53:36.735962   36618 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:53:36.848356   36618 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:53:36.849031   36618 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:53:36.870925   36618 out.go:204]   - Booting up control plane ...
	I0906 15:53:36.871084   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:53:36.871201   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:53:36.871311   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:53:36.871457   36618 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:53:36.871744   36618 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:53:35.440714   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:37.441318   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:39.441654   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:41.442159   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:43.940095   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:45.940829   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:47.941618   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:50.441918   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:52.940878   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:54.943528   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:56.943592   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:59.442374   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:01.443183   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:03.944275   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:06.442342   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:08.942198   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:11.442663   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:16.829056   36618 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:54:16.829917   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:16.830124   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
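	The failing probe can be re-run by hand to separate "kubelet not running" from "kubelet unhealthy" (the endpoint is the same one kubeadm polls; the systemctl/journalctl commands mirror kubeadm's own troubleshooting advice further down):
		# connection refused here means nothing is listening on the kubelet health port at all
		curl -sSL http://localhost:10248/healthz
		# check whether the unit is active, and why it exited if not
		systemctl status kubelet
		journalctl -xeu kubelet | tail -n 50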
	I0906 15:54:13.444236   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:15.941133   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:17.942335   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:21.827690   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:21.827848   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:20.442403   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:22.941548   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:24.942579   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:27.441632   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:31.820981   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:31.821186   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:29.444387   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:31.942340   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:34.441535   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:36.442205   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:38.943078   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:41.441772   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:43.940702   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:45.941793   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:47.436849   37212 pod_ready.go:81] duration metric: took 4m0.005822558s waiting for pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace to be "Ready" ...
	E0906 15:54:47.436870   37212 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 15:54:47.436887   37212 pod_ready.go:38] duration metric: took 4m9.498472217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:54:47.436919   37212 kubeadm.go:631] restartCluster took 4m19.144412803s
	W0906 15:54:47.437043   37212 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0906 15:54:47.437069   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 15:54:51.743270   37212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.306176563s)
	I0906 15:54:51.743330   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:54:51.752980   37212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:54:51.760278   37212 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:54:51.760326   37212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:54:51.767387   37212 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:54:51.767414   37212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:54:51.808770   37212 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
	I0906 15:54:51.808802   37212 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:54:51.904557   37212 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:54:51.904648   37212 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:54:51.904725   37212 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:54:52.025732   37212 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:54:52.050514   37212 out.go:204]   - Generating certificates and keys ...
	I0906 15:54:52.050582   37212 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:54:52.050668   37212 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:54:52.050742   37212 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:54:52.050789   37212 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:54:52.050842   37212 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:54:52.050887   37212 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:54:52.050939   37212 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:54:52.050986   37212 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:54:52.051056   37212 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:54:52.051129   37212 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:54:52.051161   37212 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:54:52.051204   37212 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:54:52.104655   37212 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:54:52.266933   37212 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:54:52.455099   37212 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:54:52.599889   37212 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:54:52.611289   37212 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:54:52.611867   37212 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:54:52.611907   37212 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 15:54:52.691695   37212 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:54:51.807304   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:51.807458   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:52.713079   37212 out.go:204]   - Booting up control plane ...
	I0906 15:54:52.713174   37212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:54:52.713236   37212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:54:52.713297   37212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:54:52.713374   37212 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:54:52.713513   37212 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:54:58.196526   37212 kubeadm.go:317] [apiclient] All control plane components are healthy after 5.503547 seconds
	I0906 15:54:58.196654   37212 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 15:54:58.203434   37212 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 15:54:58.718698   37212 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 15:54:58.718859   37212 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220906154915-22187 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 15:54:59.224635   37212 kubeadm.go:317] [bootstrap-token] Using token: g5os1h.xfjbuvdd1xawa0ky
	I0906 15:54:59.261788   37212 out.go:204]   - Configuring RBAC rules ...
	I0906 15:54:59.262049   37212 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 15:54:59.262337   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 15:54:59.268841   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 15:54:59.270852   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 15:54:59.272955   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 15:54:59.274702   37212 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 15:54:59.281328   37212 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 15:54:59.432647   37212 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0906 15:54:59.632647   37212 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0906 15:54:59.633705   37212 kubeadm.go:317] 
	I0906 15:54:59.633803   37212 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0906 15:54:59.633816   37212 kubeadm.go:317] 
	I0906 15:54:59.633881   37212 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0906 15:54:59.633888   37212 kubeadm.go:317] 
	I0906 15:54:59.633907   37212 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0906 15:54:59.633950   37212 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 15:54:59.633984   37212 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 15:54:59.633989   37212 kubeadm.go:317] 
	I0906 15:54:59.634058   37212 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0906 15:54:59.634067   37212 kubeadm.go:317] 
	I0906 15:54:59.634138   37212 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 15:54:59.634148   37212 kubeadm.go:317] 
	I0906 15:54:59.634185   37212 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0906 15:54:59.634235   37212 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 15:54:59.634291   37212 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 15:54:59.634298   37212 kubeadm.go:317] 
	I0906 15:54:59.634350   37212 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 15:54:59.634399   37212 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0906 15:54:59.634404   37212 kubeadm.go:317] 
	I0906 15:54:59.634457   37212 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token g5os1h.xfjbuvdd1xawa0ky \
	I0906 15:54:59.634532   37212 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
	I0906 15:54:59.634554   37212 kubeadm.go:317] 	--control-plane 
	I0906 15:54:59.634562   37212 kubeadm.go:317] 
	I0906 15:54:59.634628   37212 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0906 15:54:59.634634   37212 kubeadm.go:317] 
	I0906 15:54:59.634703   37212 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token g5os1h.xfjbuvdd1xawa0ky \
	I0906 15:54:59.634778   37212 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
	I0906 15:54:59.637971   37212 kubeadm.go:317] W0906 22:54:51.815271    7827 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:54:59.638087   37212 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:54:59.638192   37212 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:54:59.638305   37212 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:54:59.638322   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:54:59.638333   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:54:59.638353   37212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:54:59.638418   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:54:59.638453   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4 minikube.k8s.io/name=default-k8s-different-port-20220906154915-22187 minikube.k8s.io/updated_at=2022_09_06T15_54_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:54:59.652946   37212 ops.go:34] apiserver oom_adj: -16
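	For context, the oom_adj probe a few lines up reads the kernel's OOM score adjustment for the apiserver; the logged value of -16 means the process is strongly deprioritized by the OOM killer. A hand-run equivalent (using -n to pick the newest matching PID, an assumption not present in the logged command):
		cat /proc/$(pgrep -n kube-apiserver)/oom_adj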
	I0906 15:54:59.765132   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:00.356297   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:00.855510   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:01.356044   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:01.855680   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:02.357560   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:02.855496   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:03.356576   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:03.857064   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:04.356922   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:04.855648   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:05.355509   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:05.856812   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:06.356378   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:06.856487   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:07.357002   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:07.855628   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:08.357475   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:08.855615   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:09.356132   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:09.856796   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:10.355518   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:10.855528   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:11.356121   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:11.855538   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:11.915907   37212 kubeadm.go:1046] duration metric: took 12.277509448s to wait for elevateKubeSystemPrivileges.
	I0906 15:55:11.915924   37212 kubeadm.go:398] StartCluster complete in 4m43.659305517s
	I0906 15:55:11.915940   37212 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:55:11.916016   37212 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:55:11.916547   37212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:55:12.432639   37212 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220906154915-22187" rescaled to 1
	I0906 15:55:12.432672   37212 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:55:12.432680   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:55:12.432706   37212 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 15:55:12.456099   37212 out.go:177] * Verifying Kubernetes components...
	I0906 15:55:12.432831   37212 config.go:180] Loaded profile config "default-k8s-different-port-20220906154915-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:55:12.456163   37212 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.456171   37212 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.456174   37212 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.456176   37212 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.499149   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 15:55:12.529511   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:55:12.529526   37212 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.529528   37212 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.529535   37212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220906154915-22187"
	W0906 15:55:12.529545   37212 addons.go:162] addon dashboard should already be in state true
	W0906 15:55:12.529553   37212 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:55:12.529626   37212 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.529660   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	W0906 15:55:12.529689   37212 addons.go:162] addon metrics-server should already be in state true
	I0906 15:55:12.529658   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	I0906 15:55:12.529766   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	I0906 15:55:12.530198   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.531221   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.531900   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.532011   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.549127   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.680879   37212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:55:12.640947   37212 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.661070   37212 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	W0906 15:55:12.680984   37212 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:55:12.718152   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	I0906 15:55:12.718210   37212 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:55:12.775898   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:55:12.755048   37212 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 15:55:12.776017   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.850072   37212 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 15:55:12.776417   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.813187   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 15:55:12.829884   37212 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220906154915-22187" to be "Ready" ...
	I0906 15:55:12.887424   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 15:55:12.887573   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 15:55:12.887589   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 15:55:12.887599   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.888232   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.901144   37212 node_ready.go:49] node "default-k8s-different-port-20220906154915-22187" has status "Ready":"True"
	I0906 15:55:12.901168   37212 node_ready.go:38] duration metric: took 13.7942ms waiting for node "default-k8s-different-port-20220906154915-22187" to be "Ready" ...
	I0906 15:55:12.901178   37212 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:55:12.916307   37212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-6g7xm" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:12.938564   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:12.974260   37212 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:55:12.974271   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:55:12.974329   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.976572   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:12.979654   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:13.045815   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:13.108896   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:55:13.121508   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 15:55:13.121527   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 15:55:13.131848   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 15:55:13.131866   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 15:55:13.209761   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 15:55:13.209774   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 15:55:13.223186   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:55:13.302916   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:55:13.302940   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 15:55:13.309224   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 15:55:13.309237   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 15:55:13.327162   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:55:13.395379   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 15:55:13.428686   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 15:55:13.522685   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 15:55:13.522699   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 15:55:13.626615   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 15:55:13.626632   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 15:55:13.721707   37212 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.192261292s)
	I0906 15:55:13.721737   37212 start.go:810] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
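	What that sed pipeline actually did: it rewrote the coredns ConfigMap, inserting a hosts block immediately before the "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to the host gateway. Reconstructed from the sed expression, the injected Corefile fragment is:
		hosts {
		   192.168.65.2 host.minikube.internal
		   fallthrough
		}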
	I0906 15:55:13.794268   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 15:55:13.794285   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 15:55:13.920312   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 15:55:13.920326   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 15:55:14.005171   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 15:55:14.005188   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 15:55:14.022831   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:55:14.022846   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 15:55:14.105185   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:55:14.326598   37212 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:14.935413   37212 pod_ready.go:102] pod "coredns-565d847f94-6g7xm" in "kube-system" namespace has status "Ready":"False"
	I0906 15:55:15.141698   37212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0906 15:55:15.178672   37212 addons.go:414] enableAddons completed in 2.745959213s
	I0906 15:55:16.937654   37212 pod_ready.go:102] pod "coredns-565d847f94-6g7xm" in "kube-system" namespace has status "Ready":"False"
	I0906 15:55:17.935795   37212 pod_ready.go:92] pod "coredns-565d847f94-6g7xm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:17.935809   37212 pod_ready.go:81] duration metric: took 5.01946616s waiting for pod "coredns-565d847f94-6g7xm" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:17.935816   37212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-q4mb7" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.446882   37212 pod_ready.go:92] pod "coredns-565d847f94-q4mb7" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.446896   37212 pod_ready.go:81] duration metric: took 511.073117ms waiting for pod "coredns-565d847f94-q4mb7" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.446904   37212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.451838   37212 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.451848   37212 pod_ready.go:81] duration metric: took 4.936622ms waiting for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.451854   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.457179   37212 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.457189   37212 pod_ready.go:81] duration metric: took 5.329087ms waiting for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.457196   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.461768   37212 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.461778   37212 pod_ready.go:81] duration metric: took 4.575554ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.461784   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmfkn" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.733119   37212 pod_ready.go:92] pod "kube-proxy-tmfkn" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.733129   37212 pod_ready.go:81] duration metric: took 271.339141ms waiting for pod "kube-proxy-tmfkn" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.733137   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:19.132361   37212 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:19.132371   37212 pod_ready.go:81] duration metric: took 399.227312ms waiting for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:19.132376   37212 pod_ready.go:38] duration metric: took 6.231173997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:55:19.132390   37212 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:55:19.132442   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:55:19.143302   37212 api_server.go:71] duration metric: took 6.710591857s to wait for apiserver process to appear ...
	I0906 15:55:19.143315   37212 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:55:19.143323   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:55:19.148529   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 200:
	ok
	I0906 15:55:19.149651   37212 api_server.go:140] control plane version: v1.25.0
	I0906 15:55:19.149659   37212 api_server.go:130] duration metric: took 6.340438ms to wait for apiserver health ...
	I0906 15:55:19.149665   37212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:55:19.338022   37212 system_pods.go:59] 9 kube-system pods found
	I0906 15:55:19.338037   37212 system_pods.go:61] "coredns-565d847f94-6g7xm" [cd12e82d-279c-477c-82a6-77663bdacc76] Running
	I0906 15:55:19.338041   37212 system_pods.go:61] "coredns-565d847f94-q4mb7" [9e68ed76-3285-4c00-9e6f-54f5de87e7a4] Running
	I0906 15:55:19.338045   37212 system_pods.go:61] "etcd-default-k8s-different-port-20220906154915-22187" [e5c83ff5-8057-4ec5-9c5e-268a762eb62a] Running
	I0906 15:55:19.338049   37212 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220906154915-22187" [ac2adb4b-dbde-47e6-9e92-97a6c9ee96f4] Running
	I0906 15:55:19.338053   37212 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220906154915-22187" [0163f669-ebfc-46ce-aa87-ffce3904c5e1] Running
	I0906 15:55:19.338059   37212 system_pods.go:61] "kube-proxy-tmfkn" [c9364049-c8f3-468a-867e-50133dcc208b] Running
	I0906 15:55:19.338064   37212 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220906154915-22187" [887554cf-68d1-4e4f-bc6f-0d65eb7e3d28] Running
	I0906 15:55:19.338069   37212 system_pods.go:61] "metrics-server-5c8fd5cf8-2pdjw" [b88a6579-9359-435f-8fb4-b7ec5c7f7d52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:55:19.338078   37212 system_pods.go:61] "storage-provisioner" [da22f144-e345-4b66-b770-500d22a98dfc] Running
	I0906 15:55:19.338082   37212 system_pods.go:74] duration metric: took 188.413972ms to wait for pod list to return data ...
	I0906 15:55:19.338089   37212 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:55:19.532218   37212 default_sa.go:45] found service account: "default"
	I0906 15:55:19.532231   37212 default_sa.go:55] duration metric: took 194.136492ms for default service account to be created ...
	I0906 15:55:19.532236   37212 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:55:19.735925   37212 system_pods.go:86] 9 kube-system pods found
	I0906 15:55:19.735939   37212 system_pods.go:89] "coredns-565d847f94-6g7xm" [cd12e82d-279c-477c-82a6-77663bdacc76] Running
	I0906 15:55:19.735944   37212 system_pods.go:89] "coredns-565d847f94-q4mb7" [9e68ed76-3285-4c00-9e6f-54f5de87e7a4] Running
	I0906 15:55:19.735947   37212 system_pods.go:89] "etcd-default-k8s-different-port-20220906154915-22187" [e5c83ff5-8057-4ec5-9c5e-268a762eb62a] Running
	I0906 15:55:19.735957   37212 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220906154915-22187" [ac2adb4b-dbde-47e6-9e92-97a6c9ee96f4] Running
	I0906 15:55:19.735962   37212 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220906154915-22187" [0163f669-ebfc-46ce-aa87-ffce3904c5e1] Running
	I0906 15:55:19.735968   37212 system_pods.go:89] "kube-proxy-tmfkn" [c9364049-c8f3-468a-867e-50133dcc208b] Running
	I0906 15:55:19.735972   37212 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220906154915-22187" [887554cf-68d1-4e4f-bc6f-0d65eb7e3d28] Running
	I0906 15:55:19.735977   37212 system_pods.go:89] "metrics-server-5c8fd5cf8-2pdjw" [b88a6579-9359-435f-8fb4-b7ec5c7f7d52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:55:19.735981   37212 system_pods.go:89] "storage-provisioner" [da22f144-e345-4b66-b770-500d22a98dfc] Running
	I0906 15:55:19.735986   37212 system_pods.go:126] duration metric: took 203.746511ms to wait for k8s-apps to be running ...
	I0906 15:55:19.735991   37212 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:55:19.736042   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:55:19.746224   37212 system_svc.go:56] duration metric: took 10.227063ms WaitForService to wait for kubelet.
	I0906 15:55:19.746239   37212 kubeadm.go:573] duration metric: took 7.313531095s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:55:19.746256   37212 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:55:19.935919   37212 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:55:19.935936   37212 node_conditions.go:123] node cpu capacity is 6
	I0906 15:55:19.935944   37212 node_conditions.go:105] duration metric: took 189.682536ms to run NodePressure ...
	I0906 15:55:19.935956   37212 start.go:216] waiting for startup goroutines ...
	I0906 15:55:19.974175   37212 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:55:20.010226   37212 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220906154915-22187" cluster and "default" namespace by default
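	At this point the kubeconfig written above is directly usable; a quick liveness check against the new context (context name taken from the Done! line) might be:
		kubectl --context default-k8s-different-port-20220906154915-22187 get pods -A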
	I0906 15:55:31.779661   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:55:31.779822   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:55:31.779830   36618 kubeadm.go:317] 
	I0906 15:55:31.779860   36618 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:55:31.779889   36618 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:55:31.779894   36618 kubeadm.go:317] 
	I0906 15:55:31.779921   36618 kubeadm.go:317] This error is likely caused by:
	I0906 15:55:31.779960   36618 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:55:31.780052   36618 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:55:31.780063   36618 kubeadm.go:317] 
	I0906 15:55:31.780169   36618 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:55:31.780219   36618 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:55:31.780247   36618 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:55:31.780251   36618 kubeadm.go:317] 
	I0906 15:55:31.780328   36618 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:55:31.780416   36618 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:55:31.780495   36618 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:55:31.780559   36618 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:55:31.780661   36618 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:55:31.780715   36618 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:55:31.783923   36618 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:55:31.784047   36618 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:55:31.784168   36618 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:55:31.784249   36618 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:55:31.784306   36618 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0906 15:55:31.784333   36618 kubeadm.go:398] StartCluster complete in 7m59.255788376s
	I0906 15:55:31.784406   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:55:31.816119   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.816135   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:55:31.816207   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:55:31.852948   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.852961   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:55:31.853021   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:55:31.884845   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.884856   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:55:31.884911   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:55:31.917054   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.917068   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:55:31.917132   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:55:31.948382   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.948395   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:55:31.948451   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:55:31.982328   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.982339   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:55:31.982387   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:55:32.013438   36618 logs.go:274] 0 containers: []
	W0906 15:55:32.013450   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:55:32.013510   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:55:32.044826   36618 logs.go:274] 0 containers: []
	W0906 15:55:32.044840   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:55:32.044847   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:55:32.044854   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:55:32.085941   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:55:32.085955   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:55:32.097748   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:55:32.097762   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:55:32.160044   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:55:32.160054   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:55:32.160060   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:55:32.174249   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:55:32.174260   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:55:34.234529   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060250655s)
	W0906 15:55:34.234640   36618 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 15:55:34.234654   36618 out.go:239] * 
	W0906 15:55:34.234769   36618 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0906 15:55:34.234800   36618 out.go:239] * 
	W0906 15:55:34.235311   36618 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:55:34.299125   36618 out.go:177] 
	W0906 15:55:34.342220   36618 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0906 15:55:34.342329   36618 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 15:55:34.342385   36618 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 15:55:34.385240   36618 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:47:29 UTC, end at Tue 2022-09-06 22:55:35 UTC. --
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[131]: time="2022-09-06T22:47:31.528204599Z" level=info msg="Processing signal 'terminated'"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[131]: time="2022-09-06T22:47:31.529151410Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[131]: time="2022-09-06T22:47:31.529777222Z" level=info msg="Daemon shutdown complete"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: docker.service: Succeeded.
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.588828648Z" level=info msg="Starting up"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590571788Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590605888Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590631004Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590641853Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591550398Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591603148Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591645967Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591685874Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.595222522Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.599079518Z" level=info msg="Loading containers: start."
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.676228835Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.708132289Z" level=info msg="Loading containers: done."
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.716192633Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.716331649Z" level=info msg="Daemon has completed initialization"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.738785771Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.741578122Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2022-09-06T22:55:38Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  22:55:38 up  1:11,  0 users,  load average: 0.73, 0.85, 0.97
	Linux old-k8s-version-20220906154143-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:47:29 UTC, end at Tue 2022-09-06 22:55:38 UTC. --
	Sep 06 22:55:36 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14437]: I0906 22:55:37.101275   14437 server.go:410] Version: v1.16.0
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14437]: I0906 22:55:37.101485   14437 plugins.go:100] No cloud provider specified.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14437]: I0906 22:55:37.101496   14437 server.go:773] Client rotation is on, will bootstrap in background
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14437]: I0906 22:55:37.103107   14437 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14437]: W0906 22:55:37.103773   14437 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14437]: W0906 22:55:37.103834   14437 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14437]: F0906 22:55:37.103859   14437 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14449]: I0906 22:55:37.855741   14449 server.go:410] Version: v1.16.0
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14449]: I0906 22:55:37.856003   14449 plugins.go:100] No cloud provider specified.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14449]: I0906 22:55:37.856014   14449 server.go:773] Client rotation is on, will bootstrap in background
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14449]: I0906 22:55:37.858039   14449 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14449]: W0906 22:55:37.858710   14449 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14449]: W0906 22:55:37.858769   14449 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 kubelet[14449]: F0906 22:55:37.858791   14449 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 06 22:55:37 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 15:55:38.199620   37637 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 2 (412.325834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220906154143-22187" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (491.41s)
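
Triage note: the kubelet journal above shows the actual root cause, 'failed to run Kubelet: mountpoint for cpu not found', so the v1.16.0 kubelet crash-loops (restart counter 162 by the end of the log) and kubeadm's wait-control-plane phase can never succeed. Below is a minimal shell sketch that re-runs the checks the log itself recommends; the profile name and the 'ssh -p <profile> <command>' form are taken from this run, while the 'mount | grep cgroup' probe is an added assumption, not something the harness executes:

	# Checks suggested by the kubeadm output, run inside the node container
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220906154143-22187 "systemctl status kubelet"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220906154143-22187 "sudo journalctl -xeu kubelet | tail -n 50"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220906154143-22187 "docker ps -a | grep kube | grep -v pause"
	# Assumption: confirm whether a cpu cgroup mountpoint is visible at all
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220906154143-22187 "mount | grep cgroup"
	# Suggestion from the log: retry the start with an explicit cgroup driver
	out/minikube-darwin-amd64 start -p old-k8s-version-20220906154143-22187 --extra-config=kubelet.cgroup-driver=systemd

If the cgroup probe comes back empty, the failure would point at the node environment (the 5.10.124-linuxkit host's cgroup layout) rather than at the test itself, consistent with the related issue linked above.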

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220906154156-22187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
E0906 15:48:37.723285   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187: exit status 2 (16.082670835s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
E0906 15:48:49.028670   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187: exit status 2 (16.082679797s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220906154156-22187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
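
The pause check is driven entirely through 'minikube status' Go-template output: the test pauses the profile, then expects '{{.APIServer}}' and '{{.Kubelet}}' to read "Paused", but both came back "Stopped" here, and each status call itself took roughly 16s before exiting with status 2, which is where most of the 42s test budget went. A sketch of reproducing the same check by hand, using the exact commands and profile from this run:

	# Pause, then query component state the same way the harness does
	out/minikube-darwin-amd64 pause -p no-preload-20220906154156-22187 --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
	out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
	# Expected after a successful pause: Paused / Paused; this run printed Stopped for both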
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220906154156-22187
helpers_test.go:235: (dbg) docker inspect no-preload-20220906154156-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede",
	        "Created": "2022-09-06T22:41:58.377320769Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 241423,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:43:15.472214221Z",
	            "FinishedAt": "2022-09-06T22:43:13.481529189Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede/hosts",
	        "LogPath": "/var/lib/docker/containers/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede-json.log",
	        "Name": "/no-preload-20220906154156-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220906154156-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220906154156-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4420df071e7c74d36ffa79d07572668f22f4c9f965efb57f9446b32baed1c3fe-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4420df071e7c74d36ffa79d07572668f22f4c9f965efb57f9446b32baed1c3fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4420df071e7c74d36ffa79d07572668f22f4c9f965efb57f9446b32baed1c3fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4420df071e7c74d36ffa79d07572668f22f4c9f965efb57f9446b32baed1c3fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220906154156-22187",
	                "Source": "/var/lib/docker/volumes/no-preload-20220906154156-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220906154156-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220906154156-22187",
	                "name.minikube.sigs.k8s.io": "no-preload-20220906154156-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f475120e7a6ff49982d8ec081912e3dc66a486d0ef85fc958af19dbcdc2161cc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59517"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59518"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59519"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59521"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f475120e7a6f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220906154156-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6ea3f3b81380",
	                        "no-preload-20220906154156-22187"
	                    ],
	                    "NetworkID": "8d8c17b397b016d05dc5a51f986c20488e0188e802075059a5752f53758b1af6",
	                    "EndpointID": "a970f6b659717a274d02408b8fe682f40f9b381ce3edaef0c9004afb37e58b91",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
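
When only the container state matters, the full 'docker inspect' dump above can be reduced with docker's standard '-f' Go-template flag; the field paths below mirror the JSON in this post-mortem, and the container name is the same kic container:

	# One-line state summary instead of the full inspect JSON
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' no-preload-20220906154156-22187
	# For this run the JSON shows Status "running" and Paused false, i.e. at
	# post-mortem time the kic container itself was running and not paused.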
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220906154156-22187 logs -n 25
E0906 15:49:05.410450   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220906154156-22187 logs -n 25: (2.598306313s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                | kindnet-20220906152522-22187            | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | kindnet-20220906152522-22187                      |                                         |         |         |                     |                     |
	| start   | -p bridge-20220906152522-22187                    | bridge-20220906152522-22187             | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220906152522-22187 | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | enable-default-cni-20220906152522-22187           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220906152522-22187 | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | enable-default-cni-20220906152522-22187           |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220906152522-22187                    | bridge-20220906152522-22187             | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| start   | -p                                                | kubenet-20220906152522-22187            | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:40 PDT |
	|         | kubenet-20220906152522-22187                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| delete  | -p bridge-20220906152522-22187                    | bridge-20220906152522-22187             | jenkins | v1.26.1 | 06 Sep 22 15:40 PDT | 06 Sep 22 15:40 PDT |
	| start   | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187             | jenkins | v1.26.1 | 06 Sep 22 15:40 PDT | 06 Sep 22 15:41 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220906152522-22187            | jenkins | v1.26.1 | 06 Sep 22 15:40 PDT | 06 Sep 22 15:40 PDT |
	|         | kubenet-20220906152522-22187                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187             | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187             | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	| start   | -p                                                | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220906152522-22187            | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	|         | kubenet-20220906152522-22187                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:42 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:45 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
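	
	The "start" invocation for the old-k8s-version profile is wrapped across several table rows above; reassembled into a single command, with the flags exactly as recorded in the audit table, it reads:
	
	  out/minikube-darwin-amd64 start -p old-k8s-version-20220906154143-22187 \
	    --memory=2200 --alsologtostderr --wait=true --kvm-network=default \
	    --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	    --keep-context=false --driver=docker --kubernetes-version=v1.16.0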
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:47:27
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
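	For example, the first entry below, "I0906 15:47:27.724326   36618 out.go:296]", decodes as: severity I (info), date 09/06, time 15:47:27.724326, thread id 36618, logged from out.go line 296.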
	I0906 15:47:27.724326   36618 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:47:27.724481   36618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:47:27.724486   36618 out.go:309] Setting ErrFile to fd 2...
	I0906 15:47:27.724490   36618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:47:27.724596   36618 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:47:27.725040   36618 out.go:303] Setting JSON to false
	I0906 15:47:27.740136   36618 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10018,"bootTime":1662494429,"procs":332,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:47:27.740244   36618 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:47:27.762199   36618 out.go:177] * [old-k8s-version-20220906154143-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:47:27.804151   36618 notify.go:193] Checking for updates...
	I0906 15:47:27.826250   36618 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:47:27.848207   36618 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:47:27.874086   36618 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:47:27.895101   36618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:47:27.916094   36618 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:47:27.937719   36618 config.go:180] Loaded profile config "old-k8s-version-20220906154143-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 15:47:27.960007   36618 out.go:177] * Kubernetes 1.25.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.0
	I0906 15:47:27.980813   36618 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:47:28.050338   36618 docker.go:137] docker version: linux-20.10.17
	I0906 15:47:28.050475   36618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:47:28.182336   36618 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:47:28.123754068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:47:28.224979   36618 out.go:177] * Using the docker driver based on existing profile
	I0906 15:47:28.245671   36618 start.go:284] selected driver: docker
	I0906 15:47:28.245703   36618 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:47:28.245851   36618 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:47:28.249022   36618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:47:28.379018   36618 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:47:28.322340605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:47:28.379175   36618 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:47:28.379194   36618 cni.go:95] Creating CNI manager for ""
	I0906 15:47:28.379205   36618 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:47:28.379215   36618 start_flags.go:310] config:
	{Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:47:28.421686   36618 out.go:177] * Starting control plane node old-k8s-version-20220906154143-22187 in cluster old-k8s-version-20220906154143-22187
	I0906 15:47:28.442547   36618 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:47:28.463689   36618 out.go:177] * Pulling base image ...
	I0906 15:47:28.506539   36618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:47:28.506550   36618 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:47:28.506598   36618 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0906 15:47:28.506611   36618 cache.go:57] Caching tarball of preloaded images
	I0906 15:47:28.506757   36618 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:47:28.506777   36618 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker
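	The preload check above can be reproduced by hand; a minimal sketch, assuming a shell with MINIKUBE_HOME set to the value listed in the environment section above:
	
	  # stat the preload tarball exactly where preload.go looks (path taken from the log)
	  ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"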
	I0906 15:47:28.507478   36618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/config.json ...
	I0906 15:47:28.570394   36618 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:47:28.570413   36618 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:47:28.570424   36618 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:47:28.570474   36618 start.go:364] acquiring machines lock for old-k8s-version-20220906154143-22187: {Name:mkf6412c70024633cc757c4659ae827dd641d20a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:47:28.570554   36618 start.go:368] acquired machines lock for "old-k8s-version-20220906154143-22187" in 63.129µs
	I0906 15:47:28.570574   36618 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:47:28.570584   36618 fix.go:55] fixHost starting: 
	I0906 15:47:28.570821   36618 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Status}}
	I0906 15:47:28.634799   36618 fix.go:103] recreateIfNeeded on old-k8s-version-20220906154143-22187: state=Stopped err=<nil>
	W0906 15:47:28.634825   36618 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:47:28.677667   36618 out.go:177] * Restarting existing docker container for "old-k8s-version-20220906154143-22187" ...
	I0906 15:47:24.923934   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:26.924897   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:28.925869   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:28.698507   36618 cli_runner.go:164] Run: docker start old-k8s-version-20220906154143-22187
	I0906 15:47:29.031374   36618 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Status}}
	I0906 15:47:29.153450   36618 kic.go:415] container "old-k8s-version-20220906154143-22187" state is running.
	I0906 15:47:29.154026   36618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:47:29.222072   36618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/config.json ...
	I0906 15:47:29.222435   36618 machine.go:88] provisioning docker machine ...
	I0906 15:47:29.222459   36618 ubuntu.go:169] provisioning hostname "old-k8s-version-20220906154143-22187"
	I0906 15:47:29.222536   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:29.288956   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:29.289172   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:29.289186   36618 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220906154143-22187 && echo "old-k8s-version-20220906154143-22187" | sudo tee /etc/hostname
	I0906 15:47:29.409404   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220906154143-22187
	
	I0906 15:47:29.409506   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:29.474903   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:29.475053   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:29.475069   36618 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220906154143-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220906154143-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220906154143-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:47:29.588648   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:47:29.588669   36618 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:47:29.588700   36618 ubuntu.go:177] setting up certificates
	I0906 15:47:29.588721   36618 provision.go:83] configureAuth start
	I0906 15:47:29.588785   36618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:47:29.653294   36618 provision.go:138] copyHostCerts
	I0906 15:47:29.653379   36618 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:47:29.653389   36618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:47:29.653484   36618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:47:29.653690   36618 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:47:29.653700   36618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:47:29.653761   36618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:47:29.653906   36618 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:47:29.653931   36618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:47:29.653991   36618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:47:29.654107   36618 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220906154143-22187 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220906154143-22187]
	I0906 15:47:29.819591   36618 provision.go:172] copyRemoteCerts
	I0906 15:47:29.819655   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:47:29.819697   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:29.883624   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:29.965244   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:47:29.981832   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0906 15:47:29.998925   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:47:30.015333   36618 provision.go:86] duration metric: configureAuth took 426.595674ms
	I0906 15:47:30.015347   36618 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:47:30.015480   36618 config.go:180] Loaded profile config "old-k8s-version-20220906154143-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 15:47:30.015536   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.078928   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:30.079080   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:30.079097   36618 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:47:30.191405   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:47:30.191416   36618 ubuntu.go:71] root file system type: overlay
	I0906 15:47:30.191564   36618 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:47:30.191653   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.257341   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:30.257518   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:30.257566   36618 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:47:30.378325   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
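	The unit as rendered can be inspected on the node once docker restarts; an illustrative check (minikube itself runs "systemctl cat docker.service" a few lines below):
	
	  sudo systemctl cat docker.service   # show the unit actually in effect
	  systemctl is-active docker          # expected to print "active" after the restart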
	I0906 15:47:30.378415   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.442083   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:30.442233   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:30.442245   36618 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:47:30.558345   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:47:30.558369   36618 machine.go:91] provisioned docker machine in 1.335922482s
	I0906 15:47:30.558380   36618 start.go:300] post-start starting for "old-k8s-version-20220906154143-22187" (driver="docker")
	I0906 15:47:30.558385   36618 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:47:30.558449   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:47:30.558496   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.623093   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:30.705359   36618 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:47:30.708767   36618 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:47:30.708781   36618 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:47:30.708788   36618 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:47:30.708793   36618 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:47:30.708801   36618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:47:30.708902   36618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:47:30.709047   36618 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:47:30.709191   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:47:30.716071   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:47:30.733188   36618 start.go:303] post-start completed in 174.799919ms
	I0906 15:47:30.733264   36618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:47:30.733307   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.797534   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:30.879275   36618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:47:30.883629   36618 fix.go:57] fixHost completed within 2.313039871s
	I0906 15:47:30.883640   36618 start.go:83] releasing machines lock for "old-k8s-version-20220906154143-22187", held for 2.313072798s
	I0906 15:47:30.883707   36618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:47:30.948370   36618 ssh_runner.go:195] Run: systemctl --version
	I0906 15:47:30.948389   36618 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0906 15:47:30.948452   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.948458   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:31.016338   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:31.016439   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:31.248577   36618 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:47:31.259106   36618 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:47:31.259179   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:47:31.270476   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:47:31.283021   36618 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:47:31.353154   36618 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:47:31.426585   36618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:47:31.501244   36618 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:47:31.715701   36618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:47:31.753351   36618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:47:31.831581   36618 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0906 15:47:31.831765   36618 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220906154143-22187 dig +short host.docker.internal
	I0906 15:47:31.962726   36618 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:47:31.962882   36618 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:47:31.967458   36618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
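	The one-liner above is minikube's hosts-update idiom: grep -v drops any stale host.minikube.internal entry, echo appends the fresh mapping, the combined output lands in a temp file /tmp/h.$$, and sudo cp installs it over /etc/hosts (a plain shell redirection would not run with root privileges on the target file).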
	I0906 15:47:31.977699   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:32.041454   36618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:47:32.041543   36618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:47:32.072812   36618 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0906 15:47:32.072839   36618 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:47:32.072992   36618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:47:32.104153   36618 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0906 15:47:32.104174   36618 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:47:32.104248   36618 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:47:32.178837   36618 cni.go:95] Creating CNI manager for ""
	I0906 15:47:32.178849   36618 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:47:32.178864   36618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:47:32.178876   36618 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220906154143-22187 NodeName:old-k8s-version-20220906154143-22187 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:47:32.178983   36618 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220906154143-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220906154143-22187
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
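	The generated config can be sanity-checked against the pinned kubeadm binary before use; a hypothetical dry run, not executed in this log, with the binary and yaml paths taken from the surrounding lines:
	
	  sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run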
	I0906 15:47:32.179051   36618 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220906154143-22187 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:47:32.179104   36618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0906 15:47:32.186748   36618 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:47:32.186801   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:47:32.194237   36618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0906 15:47:32.207073   36618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:47:32.219494   36618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0906 15:47:32.231803   36618 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:47:32.235747   36618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:47:32.245191   36618 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187 for IP: 192.168.67.2
	I0906 15:47:32.245304   36618 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:47:32.245353   36618 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:47:32.245429   36618 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/client.key
	I0906 15:47:32.245528   36618 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key.c7fa3a9e
	I0906 15:47:32.245585   36618 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key
	I0906 15:47:32.245795   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:47:32.245830   36618 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:47:32.245842   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:47:32.245883   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:47:32.245913   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:47:32.245939   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:47:32.246002   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:47:32.246567   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:47:32.263431   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:47:32.280089   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:47:32.296976   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:47:32.313479   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:47:32.330881   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:47:32.347457   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:47:32.364209   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:47:32.381370   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:47:32.398376   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:47:32.415314   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:47:32.435759   36618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:47:32.448194   36618 ssh_runner.go:195] Run: openssl version
	I0906 15:47:32.453444   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:47:32.461315   36618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:47:32.465115   36618 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:47:32.465156   36618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:47:32.470177   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:47:32.477357   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:47:32.486000   36618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:47:32.490512   36618 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:47:32.490562   36618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:47:32.495831   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:47:32.503224   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:47:32.510979   36618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:47:32.514699   36618 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:47:32.514745   36618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:47:32.519767   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
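
The openssl/ln pairs above follow OpenSSL's subject-hash lookup convention: trust checks scan /etc/ssl/certs for a file named <subject-hash>.0, so every CA PEM needs a matching symlink. One iteration, sketched with paths from the log (the hash is whatever openssl prints, e.g. b5213941 for minikubeCA above):

    # Compute the subject hash OpenSSL resolves at verification time,
    # then point <hash>.0 at the PEM so the CA is found.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
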
	I0906 15:47:32.527226   36618 kubeadm.go:396] StartCluster: {Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:47:32.527360   36618 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:47:32.556441   36618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:47:32.563997   36618 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:47:32.564011   36618 kubeadm.go:627] restartCluster start
	I0906 15:47:32.564056   36618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:47:32.571007   36618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:32.571067   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:32.636552   36618 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220906154143-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:47:32.636751   36618 kubeconfig.go:127] "old-k8s-version-20220906154143-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:47:32.637095   36618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:47:32.638467   36618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:47:32.646914   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:32.646978   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:32.655320   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:31.423739   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:33.426307   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:32.857447   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:32.857626   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:32.867436   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.055442   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.055550   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.064764   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.255502   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.255571   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.264739   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.457093   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.457154   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.466479   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.656960   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.657112   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.666024   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.855454   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.855536   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.865698   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.056197   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.056330   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.066451   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.255620   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.255698   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.265530   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.456233   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.456324   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.465752   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.657449   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.657577   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.667461   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.856463   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.856602   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.867085   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.055895   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.056016   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.065978   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.257473   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.257650   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.268029   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.455491   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.455556   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.466826   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.657485   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.657645   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.667632   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.667642   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.667684   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.675713   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.675723   36618 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:47:35.675732   36618 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:47:35.675789   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:47:35.705109   36618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:47:35.715429   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:47:35.723190   36618 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Sep  6 22:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Sep  6 22:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Sep  6 22:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Sep  6 22:43 /etc/kubernetes/scheduler.conf
	
	I0906 15:47:35.723254   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:47:35.730810   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:47:35.738212   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:47:35.745776   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:47:35.753962   36618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:47:35.761363   36618 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:47:35.761377   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:35.813510   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:36.680895   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:36.890193   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:36.953067   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
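
Instead of a full kubeadm init, the restart path above replays only the phases that regenerate on-disk state: certs, kubeconfigs, kubelet bring-up, static-pod manifests, and the local etcd manifest. Condensed, with the binary and config paths from the log (a sketch, not minikube's exact driver code):

    K8S_BIN=/var/lib/minikube/binaries/v1.16.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    # Phase order matters: certs before kubeconfigs, kubelet before the control plane.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="${K8S_BIN}:${PATH}" kubeadm init phase ${phase} --config "${CFG}"
    done
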
	I0906 15:47:37.007310   36618 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:47:37.007369   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:37.515752   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:35.923079   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:37.924999   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:38.017852   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:38.517627   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:39.017853   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:39.516530   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:40.016953   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:40.516341   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:41.017684   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:41.516454   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:42.017850   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:42.516815   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:40.424300   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:42.426950   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:43.918419   36235 pod_ready.go:81] duration metric: took 4m0.072197501s waiting for pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace to be "Ready" ...
	E0906 15:47:43.918436   36235 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 15:47:43.918447   36235 pod_ready.go:38] duration metric: took 4m14.120338871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:47:43.918495   36235 kubeadm.go:631] restartCluster took 4m24.896287854s
	W0906 15:47:43.918570   36235 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0906 15:47:43.918586   36235 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 15:47:43.015747   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:43.517836   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:44.017465   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:44.515754   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:45.015795   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:45.515857   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:46.015952   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:46.515728   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:47.015825   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:47.515705   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:48.252291   36235 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.333677266s)
	I0906 15:47:48.252349   36235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:47:48.262002   36235 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:47:48.269405   36235 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:47:48.269449   36235 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:47:48.277327   36235 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:47:48.277359   36235 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:47:48.317989   36235 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
	I0906 15:47:48.318026   36235 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:47:48.417304   36235 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:47:48.417396   36235 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:47:48.417478   36235 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:47:48.541595   36235 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:47:48.562972   36235 out.go:204]   - Generating certificates and keys ...
	I0906 15:47:48.563021   36235 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:47:48.563091   36235 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:47:48.563173   36235 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:47:48.563227   36235 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:47:48.563328   36235 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:47:48.563374   36235 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:47:48.563428   36235 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:47:48.563486   36235 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:47:48.563570   36235 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:47:48.563636   36235 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:47:48.563668   36235 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:47:48.563707   36235 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:47:48.833960   36235 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:47:49.099973   36235 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:47:49.343100   36235 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:47:49.443942   36235 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:47:49.456526   36235 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:47:49.457063   36235 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:47:49.457094   36235 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 15:47:49.530288   36235 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:47:48.015772   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:48.516034   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:49.015789   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:49.516635   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:50.015757   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:50.515860   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:51.015748   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:51.517724   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:52.016065   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:52.516074   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:49.551894   36235 out.go:204]   - Booting up control plane ...
	I0906 15:47:49.551978   36235 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:47:49.552040   36235 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:47:49.552108   36235 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:47:49.552174   36235 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:47:49.552326   36235 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:47:55.535092   36235 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002671 seconds
	I0906 15:47:55.535186   36235 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 15:47:55.542271   36235 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 15:47:56.055146   36235 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 15:47:56.055306   36235 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220906154156-22187 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 15:47:56.562432   36235 kubeadm.go:317] [bootstrap-token] Using token: mcb4oi.u2w1oe6vxlxjfpx3
	I0906 15:47:53.016794   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:53.516769   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:54.015802   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:54.516398   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:55.015770   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:55.517646   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:56.016754   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:56.517915   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:57.015874   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:57.517815   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:56.601464   36235 out.go:204]   - Configuring RBAC rules ...
	I0906 15:47:56.601575   36235 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 15:47:56.601643   36235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 15:47:56.641208   36235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 15:47:56.643306   36235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 15:47:56.645139   36235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 15:47:56.647038   36235 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 15:47:56.655615   36235 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 15:47:56.801367   36235 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0906 15:47:56.970199   36235 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0906 15:47:56.970885   36235 kubeadm.go:317] 
	I0906 15:47:56.970955   36235 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0906 15:47:56.970967   36235 kubeadm.go:317] 
	I0906 15:47:56.971024   36235 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0906 15:47:56.971031   36235 kubeadm.go:317] 
	I0906 15:47:56.971052   36235 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0906 15:47:56.971109   36235 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 15:47:56.971164   36235 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 15:47:56.971174   36235 kubeadm.go:317] 
	I0906 15:47:56.971224   36235 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0906 15:47:56.971234   36235 kubeadm.go:317] 
	I0906 15:47:56.971278   36235 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 15:47:56.971287   36235 kubeadm.go:317] 
	I0906 15:47:56.971325   36235 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0906 15:47:56.971403   36235 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 15:47:56.971510   36235 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 15:47:56.971519   36235 kubeadm.go:317] 
	I0906 15:47:56.971578   36235 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 15:47:56.971660   36235 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0906 15:47:56.971668   36235 kubeadm.go:317] 
	I0906 15:47:56.971752   36235 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token mcb4oi.u2w1oe6vxlxjfpx3 \
	I0906 15:47:56.971848   36235 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
	I0906 15:47:56.971881   36235 kubeadm.go:317] 	--control-plane 
	I0906 15:47:56.971894   36235 kubeadm.go:317] 
	I0906 15:47:56.971972   36235 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0906 15:47:56.971979   36235 kubeadm.go:317] 
	I0906 15:47:56.972032   36235 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token mcb4oi.u2w1oe6vxlxjfpx3 \
	I0906 15:47:56.972097   36235 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
	I0906 15:47:56.975523   36235 kubeadm.go:317] W0906 22:47:48.322413    7890 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:47:56.975665   36235 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:47:56.975727   36235 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:47:56.975827   36235 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:47:56.975848   36235 cni.go:95] Creating CNI manager for ""
	I0906 15:47:56.975856   36235 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:47:56.975871   36235 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:47:56.975972   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:56.975973   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4 minikube.k8s.io/name=no-preload-20220906154156-22187 minikube.k8s.io/updated_at=2022_09_06T15_47_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:57.011452   36235 ops.go:34] apiserver oom_adj: -16
	I0906 15:47:57.149466   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:57.710791   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:58.212221   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:58.711933   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:59.211648   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:58.015852   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:58.516201   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:59.016002   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:59.515787   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:00.017830   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:00.516806   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:01.015847   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:01.516910   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:02.016851   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:02.517315   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:59.711025   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:00.212072   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:00.712060   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:01.211123   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:01.711481   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:02.210969   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:02.710737   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:03.210635   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:03.710789   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:04.210794   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:03.015916   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:03.516678   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:04.017779   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:04.517538   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:05.016029   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:05.516024   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:06.016955   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:06.516680   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:07.017903   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:07.517898   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:04.710808   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:05.210845   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:05.712748   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:06.210668   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:06.711457   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:07.212733   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:07.710645   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:08.212201   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:08.712700   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:09.210804   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:09.422281   36235 kubeadm.go:1046] duration metric: took 12.446355264s to wait for elevateKubeSystemPrivileges.
	I0906 15:48:09.422300   36235 kubeadm.go:398] StartCluster complete in 4m50.435904288s
	I0906 15:48:09.422319   36235 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:48:09.422390   36235 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:48:09.422978   36235 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:48:09.937253   36235 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220906154156-22187" rescaled to 1
	I0906 15:48:09.937287   36235 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:48:09.937295   36235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:48:09.937313   36235 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 15:48:09.960624   36235 out.go:177] * Verifying Kubernetes components...
	I0906 15:48:09.937453   36235 config.go:180] Loaded profile config "no-preload-20220906154156-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:48:09.960682   36235 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220906154156-22187"
	I0906 15:48:09.960681   36235 addons.go:65] Setting dashboard=true in profile "no-preload-20220906154156-22187"
	I0906 15:48:09.960681   36235 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220906154156-22187"
	I0906 15:48:09.960711   36235 addons.go:65] Setting metrics-server=true in profile "no-preload-20220906154156-22187"
	I0906 15:48:09.980494   36235 addons.go:153] Setting addon metrics-server=true in "no-preload-20220906154156-22187"
	W0906 15:48:09.980504   36235 addons.go:162] addon metrics-server should already be in state true
	I0906 15:48:09.980504   36235 addons.go:153] Setting addon dashboard=true in "no-preload-20220906154156-22187"
	W0906 15:48:09.980566   36235 addons.go:162] addon dashboard should already be in state true
	I0906 15:48:09.980574   36235 host.go:66] Checking if "no-preload-20220906154156-22187" exists ...
	I0906 15:48:09.980577   36235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:48:09.980511   36235 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220906154156-22187"
	I0906 15:48:09.980590   36235 host.go:66] Checking if "no-preload-20220906154156-22187" exists ...
	W0906 15:48:09.980597   36235 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:48:09.980518   36235 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220906154156-22187"
	I0906 15:48:09.980626   36235 host.go:66] Checking if "no-preload-20220906154156-22187" exists ...
	I0906 15:48:09.980876   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:09.980887   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:09.980942   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:09.981390   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:10.008119   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.008264   36235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
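
The sed pipeline above splices a static hosts block into the CoreDNS Corefile immediately ahead of its existing forward directive, then swaps the ConfigMap back in via kubectl replace -f -. The injected fragment (record taken from that same line) renders in the Corefile as:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf   # pre-existing directive the block is inserted before

With fallthrough set, every name other than host.minikube.internal keeps flowing to the forward plugin as before.
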
	I0906 15:48:10.082564   36235 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220906154156-22187"
	W0906 15:48:10.099577   36235 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:48:10.099546   36235 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 15:48:10.099633   36235 host.go:66] Checking if "no-preload-20220906154156-22187" exists ...
	I0906 15:48:10.121403   36235 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 15:48:10.142121   36235 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:48:10.142134   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 15:48:10.121864   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:10.142208   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.163355   36235 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:48:10.184356   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:48:10.184368   36235 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 15:48:10.184499   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.227230   36235 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0906 15:48:10.253392   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 15:48:10.253413   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 15:48:10.253505   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.256709   36235 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220906154156-22187" to be "Ready" ...
	I0906 15:48:10.268170   36235 node_ready.go:49] node "no-preload-20220906154156-22187" has status "Ready":"True"
	I0906 15:48:10.268183   36235 node_ready.go:38] duration metric: took 11.34914ms waiting for node "no-preload-20220906154156-22187" to be "Ready" ...
	I0906 15:48:10.268191   36235 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:48:10.269285   36235 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:48:10.269303   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:48:10.269393   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.272640   36235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59517 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/no-preload-20220906154156-22187/id_rsa Username:docker}
	I0906 15:48:10.279072   36235 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-8kwg7" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:10.287591   36235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59517 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/no-preload-20220906154156-22187/id_rsa Username:docker}
	I0906 15:48:10.340399   36235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59517 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/no-preload-20220906154156-22187/id_rsa Username:docker}
	I0906 15:48:10.349408   36235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59517 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/no-preload-20220906154156-22187/id_rsa Username:docker}
	I0906 15:48:10.379572   36235 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 15:48:10.379583   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 15:48:10.394531   36235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:48:10.401434   36235 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 15:48:10.401446   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 15:48:10.419319   36235 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:48:10.419334   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 15:48:10.434945   36235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:48:10.436708   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 15:48:10.436721   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 15:48:10.441562   36235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:48:10.510954   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 15:48:10.510968   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 15:48:10.597909   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 15:48:10.597943   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 15:48:10.635463   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 15:48:10.635476   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 15:48:10.724230   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 15:48:10.724243   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 15:48:10.809449   36235 start.go:810] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0906 15:48:10.810185   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 15:48:10.810198   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 15:48:10.831489   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 15:48:10.831503   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 15:48:10.924259   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 15:48:10.924282   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 15:48:11.020863   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:48:11.020885   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 15:48:11.098604   36235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:48:11.132510   36235 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220906154156-22187"
	I0906 15:48:11.942349   36235 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0906 15:48:08.017766   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:08.516568   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:09.017963   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:09.516751   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:10.016603   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:10.515832   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:11.015880   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:11.515835   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:12.015846   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:12.515867   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:11.978411   36235 addons.go:414] enableAddons completed in 2.041098799s
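
The addon-enable lines above show the pattern minikube uses: each manifest is copied to /etc/kubernetes/addons over SSH ("scp memory -->"), then a single kubectl invocation applies them with the in-VM kubeconfig. Below is a minimal Go sketch of that apply step, not minikube's actual code; the kubectl path, KUBECONFIG value, and manifest names are taken from the log lines above.

    // Sketch: apply the metrics-server addon manifests in one kubectl call,
    // mirroring the ssh_runner invocation logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.25.0/kubectl", args...)
    	// Use the cluster's kubeconfig, as in the logged command line.
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "apply failed: %v\n%s", err, out)
    		os.Exit(1)
    	}
    	fmt.Printf("%s", out)
    }
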
	I0906 15:48:12.301175   36235 pod_ready.go:102] pod "coredns-565d847f94-8kwg7" in "kube-system" namespace has status "Ready":"False"
	I0906 15:48:13.015821   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:13.515843   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:14.017921   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:14.515835   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:15.015965   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:15.516522   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:16.015903   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:16.515800   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:17.015904   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:17.515890   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:14.796966   36235 pod_ready.go:102] pod "coredns-565d847f94-8kwg7" in "kube-system" namespace has status "Ready":"False"
	I0906 15:48:16.296411   36235 pod_ready.go:92] pod "coredns-565d847f94-8kwg7" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.296426   36235 pod_ready.go:81] duration metric: took 6.017319795s waiting for pod "coredns-565d847f94-8kwg7" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.296434   36235 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-mqnzj" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.302113   36235 pod_ready.go:92] pod "coredns-565d847f94-mqnzj" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.302126   36235 pod_ready.go:81] duration metric: took 5.686272ms waiting for pod "coredns-565d847f94-mqnzj" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.302135   36235 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.308452   36235 pod_ready.go:92] pod "etcd-no-preload-20220906154156-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.308461   36235 pod_ready.go:81] duration metric: took 6.320797ms waiting for pod "etcd-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.308468   36235 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.312981   36235 pod_ready.go:92] pod "kube-apiserver-no-preload-20220906154156-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.312991   36235 pod_ready.go:81] duration metric: took 4.518776ms waiting for pod "kube-apiserver-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.312997   36235 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.317752   36235 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220906154156-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.317762   36235 pod_ready.go:81] duration metric: took 4.759615ms waiting for pod "kube-controller-manager-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.317768   36235 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-85lwm" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.693753   36235 pod_ready.go:92] pod "kube-proxy-85lwm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.693763   36235 pod_ready.go:81] duration metric: took 375.989665ms waiting for pod "kube-proxy-85lwm" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.693771   36235 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:17.094847   36235 pod_ready.go:92] pod "kube-scheduler-no-preload-20220906154156-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:17.094858   36235 pod_ready.go:81] duration metric: took 401.058363ms waiting for pod "kube-scheduler-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:17.094863   36235 pod_ready.go:38] duration metric: took 6.826644879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
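
The pod_ready lines above wait, per label selector, for every system-critical pod to report the Ready condition. A hedged stand-in for that loop is `kubectl wait`, shown below; the selectors mirror the list in the summary line, while the 6m timeout matches the "waiting up to 6m0s" entries. This is an equivalent sketch, not minikube's internal pod_ready implementation.

    // Sketch: block until pods matching each control-plane selector are Ready.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	selectors := []string{
    		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    	}
    	for _, sel := range selectors {
    		cmd := exec.Command("kubectl", "-n", "kube-system", "wait",
    			"--for=condition=Ready", "pod", "-l", sel, "--timeout=6m")
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "pods with %q never became Ready: %v\n", sel, err)
    			os.Exit(1)
    		}
    	}
    }
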
	I0906 15:48:17.094877   36235 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:48:17.094923   36235 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:17.105083   36235 api_server.go:71] duration metric: took 7.167759192s to wait for apiserver process to appear ...
	I0906 15:48:17.105099   36235 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:48:17.105109   36235 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59521/healthz ...
	I0906 15:48:17.110476   36235 api_server.go:266] https://127.0.0.1:59521/healthz returned 200:
	ok
	I0906 15:48:17.111716   36235 api_server.go:140] control plane version: v1.25.0
	I0906 15:48:17.111724   36235 api_server.go:130] duration metric: took 6.620173ms to wait for apiserver health ...
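
The healthz lines above probe the apiserver through the Docker-forwarded port until it returns 200. A minimal sketch of such a poll follows; the port (59521) comes from the log, while the retry count, interval, and the decision to skip TLS verification (the apiserver cert is not issued for 127.0.0.1 here) are illustrative assumptions.

    // Sketch: poll https://127.0.0.1:59521/healthz until it returns HTTP 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Local health probe only: cert verification intentionally skipped.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get("https://127.0.0.1:59521/healthz")
    		if err == nil && resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			resp.Body.Close()
    			return
    		}
    		if resp != nil {
    			resp.Body.Close()
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver never became healthy")
    }
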
	I0906 15:48:17.111732   36235 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:48:17.296978   36235 system_pods.go:59] 9 kube-system pods found
	I0906 15:48:17.296991   36235 system_pods.go:61] "coredns-565d847f94-8kwg7" [6dc3f46c-764e-41ed-8bb1-d475e0fb346d] Running
	I0906 15:48:17.296995   36235 system_pods.go:61] "coredns-565d847f94-mqnzj" [6c277a1a-5f42-45f3-b1ac-20b7a030c5e3] Running
	I0906 15:48:17.296998   36235 system_pods.go:61] "etcd-no-preload-20220906154156-22187" [d293ae93-c12f-4d61-8843-8726be90988e] Running
	I0906 15:48:17.297003   36235 system_pods.go:61] "kube-apiserver-no-preload-20220906154156-22187" [2e0c2b15-cc49-4adb-96f9-7d6d357a6f67] Running
	I0906 15:48:17.297007   36235 system_pods.go:61] "kube-controller-manager-no-preload-20220906154156-22187" [50b21e87-f3e3-49b5-9d7c-793afe2c7a89] Running
	I0906 15:48:17.297011   36235 system_pods.go:61] "kube-proxy-85lwm" [b58d2960-b28e-45dc-ad87-ce8a61130c78] Running
	I0906 15:48:17.297017   36235 system_pods.go:61] "kube-scheduler-no-preload-20220906154156-22187" [21304ce8-b2e8-4c39-b113-e4b28c6dd61f] Running
	I0906 15:48:17.297022   36235 system_pods.go:61] "metrics-server-5c8fd5cf8-dsmkc" [aeeb9062-f6d0-49c4-b625-66e11226d676] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:48:17.297027   36235 system_pods.go:61] "storage-provisioner" [1c03a38b-d8c6-44e4-8404-b8bb5cbad02c] Running
	I0906 15:48:17.297032   36235 system_pods.go:74] duration metric: took 185.294386ms to wait for pod list to return data ...
	I0906 15:48:17.297038   36235 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:48:17.493254   36235 default_sa.go:45] found service account: "default"
	I0906 15:48:17.493267   36235 default_sa.go:55] duration metric: took 196.222978ms for default service account to be created ...
	I0906 15:48:17.493276   36235 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:48:17.696748   36235 system_pods.go:86] 9 kube-system pods found
	I0906 15:48:17.696762   36235 system_pods.go:89] "coredns-565d847f94-8kwg7" [6dc3f46c-764e-41ed-8bb1-d475e0fb346d] Running
	I0906 15:48:17.696768   36235 system_pods.go:89] "coredns-565d847f94-mqnzj" [6c277a1a-5f42-45f3-b1ac-20b7a030c5e3] Running
	I0906 15:48:17.696771   36235 system_pods.go:89] "etcd-no-preload-20220906154156-22187" [d293ae93-c12f-4d61-8843-8726be90988e] Running
	I0906 15:48:17.696775   36235 system_pods.go:89] "kube-apiserver-no-preload-20220906154156-22187" [2e0c2b15-cc49-4adb-96f9-7d6d357a6f67] Running
	I0906 15:48:17.696780   36235 system_pods.go:89] "kube-controller-manager-no-preload-20220906154156-22187" [50b21e87-f3e3-49b5-9d7c-793afe2c7a89] Running
	I0906 15:48:17.696784   36235 system_pods.go:89] "kube-proxy-85lwm" [b58d2960-b28e-45dc-ad87-ce8a61130c78] Running
	I0906 15:48:17.696789   36235 system_pods.go:89] "kube-scheduler-no-preload-20220906154156-22187" [21304ce8-b2e8-4c39-b113-e4b28c6dd61f] Running
	I0906 15:48:17.696794   36235 system_pods.go:89] "metrics-server-5c8fd5cf8-dsmkc" [aeeb9062-f6d0-49c4-b625-66e11226d676] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:48:17.696799   36235 system_pods.go:89] "storage-provisioner" [1c03a38b-d8c6-44e4-8404-b8bb5cbad02c] Running
	I0906 15:48:17.696804   36235 system_pods.go:126] duration metric: took 203.523587ms to wait for k8s-apps to be running ...
	I0906 15:48:17.696810   36235 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:48:17.696862   36235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:48:17.706938   36235 system_svc.go:56] duration metric: took 10.121838ms WaitForService to wait for kubelet.
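
The kubelet check above relies on `systemctl is-active --quiet`, which prints nothing and reports state purely through its exit status. A tiny sketch of reading that exit status from Go (an illustration, not minikube's system_svc code):

    // Sketch: a nil error from Run means the kubelet unit is active.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
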
	I0906 15:48:17.706952   36235 kubeadm.go:573] duration metric: took 7.769631523s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:48:17.706972   36235 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:48:17.893458   36235 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:48:17.893470   36235 node_conditions.go:123] node cpu capacity is 6
	I0906 15:48:17.893499   36235 node_conditions.go:105] duration metric: took 186.518729ms to run NodePressure ...
	I0906 15:48:17.893519   36235 start.go:216] waiting for startup goroutines ...
	I0906 15:48:17.927336   36235 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:48:18.002668   36235 out.go:177] * Done! kubectl is now configured to use "no-preload-20220906154156-22187" cluster and "default" namespace by default
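
The "minor skew: 0" line above compares the kubectl client's minor version against the cluster's. A hypothetical sketch of that comparison, with the two versions hard-coded from the log line for illustration:

    // Sketch: report the absolute difference between minor version components.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return 0
    	}
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	client, cluster := "1.25.0", "1.25.0"
    	skew := minor(client) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }
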
	I0906 15:48:18.016702   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:18.516309   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:19.015893   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:19.515844   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:20.015875   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:20.515860   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:21.015861   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:21.515854   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:22.015816   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:22.515876   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:23.016575   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:23.516149   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:24.016188   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:24.515905   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:25.016602   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:25.518008   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:26.016339   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:26.517230   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:27.016823   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:27.516887   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:28.017965   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:28.517474   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:29.017430   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:29.518014   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:30.015916   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:30.516342   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:31.017840   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:31.516300   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:32.016103   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:32.517934   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:33.015945   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:33.516276   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:34.016960   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:34.517486   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:35.018019   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:35.516988   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:36.018005   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:36.516078   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
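
The long run of pgrep lines above is process 36618 polling, every 500ms, for a kube-apiserver process that never appears: `pgrep -x` matches the pattern against the full string, `-f` matches the whole command line, and `-n` keeps only the newest PID, so a zero exit status means the process exists. A sketch of that poll (retry count is an assumption; the pattern and cadence come from the log):

    // Sketch: poll for a kube-apiserver process via pgrep, as logged above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	for i := 0; i < 20; i++ {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil { // exit status 0: a matching process exists
    			fmt.Printf("apiserver running, pid %s", out)
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("no kube-apiserver process appeared")
    }
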
	I0906 15:48:37.018027   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:37.047583   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.047595   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:37.047651   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:37.076314   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.076326   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:37.076388   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:37.105746   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.105758   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:37.105817   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:37.133889   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.133902   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:37.133959   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:37.163122   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.163133   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:37.163190   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:37.191877   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.191889   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:37.191961   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:37.220968   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.220981   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:37.221041   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:37.249271   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.249284   36618 logs.go:276] No container was found matching "kube-controller-manager"
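
The discovery round above lists containers per control-plane component by the kubeadm-style `k8s_` name prefix; every query returns zero IDs, which is what produces the "No container was found" warnings. A sketch of that round (the component list and filter format come from the log; the helper is illustrative):

    // Sketch: collect container IDs for each k8s_-prefixed component name.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(name string) []string {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kubernetes-dashboard", "storage-provisioner", "kube-controller-manager"} {
    		ids := containerIDs(c)
    		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
    	}
    }
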
	I0906 15:48:37.249291   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:37.249297   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:37.289900   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:37.289914   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:37.301542   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:37.301557   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:37.353958   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:37.353972   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:37.353979   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:37.368054   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:37.368066   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:39.423867   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055782714s)
	I0906 15:48:41.924165   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:42.016977   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:42.047623   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.047635   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:42.047691   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:42.077331   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.077346   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:42.077407   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:42.107184   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.107199   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:42.107261   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:42.139027   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.139041   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:42.139107   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:42.175702   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.175713   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:42.175776   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:42.205201   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.205215   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:42.205276   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:42.234618   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.234630   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:42.234693   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:42.263411   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.263423   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:42.263430   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:42.263436   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:42.303796   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:42.303810   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:42.315377   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:42.315391   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:42.369166   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:42.369179   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:42.369186   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:42.383742   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:42.383754   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:44.433916   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050143742s)
	I0906 15:48:46.934245   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:47.016004   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:47.046573   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.046585   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:47.046640   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:47.077019   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.077031   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:47.077092   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:47.107321   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.107334   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:47.107389   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:47.137709   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.137721   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:47.137777   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:47.169281   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.169295   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:47.169355   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:47.197280   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.197292   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:47.197350   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:47.226913   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.226930   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:47.226989   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:47.257981   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.257992   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:47.258000   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:47.258006   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:49.312362   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054338446s)
	I0906 15:48:49.312470   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:49.312476   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:49.351688   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:49.351702   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:49.363819   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:49.363836   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:49.415301   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:49.415311   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:49.415318   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:51.930431   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:52.016736   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:52.046820   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.046831   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:52.046886   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:52.075587   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.075599   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:52.075657   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:52.105073   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.105085   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:52.105140   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:52.134789   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.134801   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:52.134864   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:52.162762   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.162782   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:52.162837   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:52.191879   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.191891   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:52.191962   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:52.221137   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.221149   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:52.221204   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:52.250240   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.250253   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:52.250259   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:52.250273   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:52.290244   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:52.290261   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:52.301674   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:52.301688   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:52.353298   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:52.353309   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:52.353316   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:52.366721   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:52.366733   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:54.420553   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053802778s)
	I0906 15:48:56.923005   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:57.018057   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:57.049543   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.049554   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:57.049612   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:57.078691   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.078706   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:57.078777   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:57.108669   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.108686   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:57.108764   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:57.141982   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.141996   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:57.142054   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:57.172447   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.172459   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:57.172522   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:57.200955   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.200971   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:57.201030   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:57.229233   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.229245   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:57.229306   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:57.258367   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.258379   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:57.258386   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:57.258394   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:57.271869   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:57.271881   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:59.326190   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054291416s)
	I0906 15:48:59.326348   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:59.326355   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:59.367821   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:59.367839   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:59.379672   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:59.379685   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:59.432111   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:01.932831   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:02.018145   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:02.048231   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.048244   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:02.048299   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:02.077507   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.077520   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:02.077580   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:02.106702   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.106713   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:02.106771   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:02.135555   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.135567   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:02.135631   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:02.164516   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.164529   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:02.164588   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:02.191790   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.191803   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:02.191862   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:02.220273   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.220286   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:02.220351   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:02.249683   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.249695   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:02.249702   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:02.249709   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:02.261264   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:02.261276   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:02.317306   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:02.317320   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:02.317326   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:02.333052   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:02.333066   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:43:15 UTC, end at Tue 2022-09-06 22:49:05 UTC. --
	Sep 06 22:47:46 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:46.952700922Z" level=info msg="ignoring event" container=ee4f19981cb61bfd85e16b849cb4c33183ca3f65d0e5260805bf400385a53ca2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.029541272Z" level=info msg="ignoring event" container=c80313488929258dea7004f105f2474d5761cb4ea10f9e15760e94f53840d517 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.170677095Z" level=info msg="ignoring event" container=ef9160dae1f69cf48ef7d2c0f21e61e5f0db958d2e222068ac7085239d739217 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.285945800Z" level=info msg="ignoring event" container=a1f9474dbd2baca9a329890916de3bfba7074729eccdac714e6098047db71e85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.350078663Z" level=info msg="ignoring event" container=4df5f446a56dcca28e1946ed92d2e66891a0833df9c4b409a02cf2802390e68a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.439793895Z" level=info msg="ignoring event" container=a595cade58a41868868ccf9184ce8772ff98183c6772fb07e6cd2aced028aedd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.506007062Z" level=info msg="ignoring event" container=a5c4cf0e4faf76b1b93d84809d448f9bb69c6528fcf32cb4185d5a4bd2601115 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.622030519Z" level=info msg="ignoring event" container=2646311a73c58d6c5a3ae3f9c034cf97f75b96129bb3d7f9637f4fbc72844d17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.692220768Z" level=info msg="ignoring event" container=81df2999200e0a690f0898ccd2c7cfb6243ec4d3b8e56d974270b3042f93e19c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.759552096Z" level=info msg="ignoring event" container=dfbbeec10ffbaf2a7a913d9246dbb592447d71794feae941ac5c18143cd324f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.832299685Z" level=info msg="ignoring event" container=1f0693f776848ad7602ff9d4a6ad3b688a75782c7a250620f7989666e9aea946 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.919298832Z" level=info msg="ignoring event" container=fdbe147768cecf48451ca47565f0f2803de4010768eae96522a94778caedccef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:12 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:12.725719172Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:12 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:12.725759575Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:12 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:12.727067919Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:13 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:13.247698754Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Sep 06 22:48:18 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:18.300963455Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 22:48:18 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:18.481494706Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 22:48:18 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:18.562671068Z" level=info msg="ignoring event" container=2ca3845f19ae5805108af0ad9a0701cfd70498ad9a75e83f4ddc56f8860d4e22 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:18 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:18.609836356Z" level=info msg="ignoring event" container=3804df3cb07bf29599955f7fc54d63c9d879f3baa43fe55a0eec3cfc52ceb762 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:22 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:22.083622764Z" level=info msg="ignoring event" container=e27a0526083f54a65c04ba6f1b18a27acba4f3e3c8fd81e005e022fb57391d06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:22 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:22.883038186Z" level=info msg="ignoring event" container=1fcf202fcdfbeb93cc684861bd69f29a9ff537b915cec520fb3e3f18d6ed0212 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:27 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:27.944382485Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:27 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:27.944677438Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:27 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:27.946092759Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	1fcf202fcdfbe       a90209bb39e3d                                                                                    44 seconds ago       Exited              dashboard-metrics-scraper   1                   92c3b6305000b
	8b8e2ca2957f1       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   49 seconds ago       Running             kubernetes-dashboard        0                   d3c142ba9c3bf
	11e8e600f961f       5185b96f0becf                                                                                    54 seconds ago       Running             coredns                     0                   e98be20667dfc
	210181832a52a       6e38f40d628db                                                                                    54 seconds ago       Running             storage-provisioner         0                   bbcf3683e916c
	5c40cf8f5a3f6       58a9a0c6d96f2                                                                                    55 seconds ago       Running             kube-proxy                  0                   0c77e9e329546
	ed6a7f025fdda       bef2cf3115095                                                                                    About a minute ago   Running             kube-scheduler              0                   eaea5f607cbe4
	b108852ace062       4d2edfd10d3e3                                                                                    About a minute ago   Running             kube-apiserver              0                   d53cb8fdb8695
	60e8e52ff56a3       a8a176a5d5d69                                                                                    About a minute ago   Running             etcd                        0                   adca1e69ac77a
	900da704596f4       1a54c86c03a67                                                                                    About a minute ago   Running             kube-controller-manager     0                   fe44d9b9e12ba
	
	* 
	* ==> coredns [11e8e600f961] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220906154156-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220906154156-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=no-preload-20220906154156-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_47_56_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:47:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220906154156-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:49:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:49:03 +0000   Tue, 06 Sep 2022 22:47:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:49:03 +0000   Tue, 06 Sep 2022 22:47:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:49:03 +0000   Tue, 06 Sep 2022 22:47:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 22:49:03 +0000   Tue, 06 Sep 2022 22:47:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-20220906154156-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                17e68662-37d6-4cb5-b265-48d4c864fb32
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-8kwg7                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     57s
	  kube-system                 etcd-no-preload-20220906154156-22187                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         69s
	  kube-system                 kube-apiserver-no-preload-20220906154156-22187             250m (4%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-no-preload-20220906154156-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-85lwm                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-no-preload-20220906154156-22187             100m (1%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 metrics-server-5c8fd5cf8-dsmkc                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         55s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        dashboard-metrics-scraper-7b94984548-5qmp7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-54596f475f-4v92l                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 54s   kube-proxy       
	  Normal  Starting                 70s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  70s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  70s   kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s   kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s   kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasSufficientPID
	  Normal  NodeReady                69s   kubelet          Node no-preload-20220906154156-22187 status is now: NodeReady
	  Normal  RegisteredNode           57s   node-controller  Node no-preload-20220906154156-22187 event: Registered Node no-preload-20220906154156-22187 in Controller
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s    kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s    kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [60e8e52ff56a] <==
	* {"level":"info","ts":"2022-09-06T22:47:51.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-09-06T22:47:51.027Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-09-06T22:47:51.027Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-06T22:47:51.027Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:47:51.027Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:47:51.028Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:47:51.028Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:47:51.819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-06T22:47:51.819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-06T22:47:51.819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-09-06T22:47:51.819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-20220906154156-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:47:51.822Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:47:51.823Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:47:51.826Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:49:06 up  1:05,  0 users,  load average: 0.53, 0.87, 1.02
	Linux no-preload-20220906154156-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b108852ace06] <==
	* I0906 22:47:54.843351       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 22:47:54.843381       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 22:47:55.103074       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:47:55.130887       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:47:55.163984       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0906 22:47:55.167455       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0906 22:47:55.168127       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 22:47:55.170964       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 22:47:55.864944       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:47:56.808409       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:47:56.815086       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0906 22:47:56.821382       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:47:56.886376       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:48:09.327233       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0906 22:48:09.500305       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0906 22:48:11.125193       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.111.64.120]
	I0906 22:48:11.903731       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.171.205]
	I0906 22:48:11.912968       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.187.198]
	W0906 22:48:11.924536       1 handler_proxy.go:102] no RequestInfo found in the context
	W0906 22:48:11.924561       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:48:11.924580       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0906 22:48:11.924586       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0906 22:48:11.924607       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 22:48:11.925762       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [900da704596f] <==
	* I0906 22:48:09.855387       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:48:09.923483       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:48:09.923537       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 22:48:10.929140       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I0906 22:48:11.012324       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c8fd5cf8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0906 22:48:11.020786       1 replica_set.go:550] sync "kube-system/metrics-server-5c8fd5cf8" failed with pods "metrics-server-5c8fd5cf8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0906 22:48:11.033525       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-dsmkc"
	I0906 22:48:11.721881       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I0906 22:48:11.727411       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:48:11.730885       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:48:11.732941       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-54596f475f to 1"
	E0906 22:48:11.735039       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:48:11.735114       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:48:11.735135       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:48:11.740407       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:48:11.742893       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:48:11.742905       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:48:11.743320       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:48:11.743325       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:48:11.750942       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:48:11.751009       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:48:11.800961       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-5qmp7"
	I0906 22:48:11.802947       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-54596f475f-4v92l"
	E0906 22:49:03.524129       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0906 22:49:03.579654       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [5c40cf8f5a3f] <==
	* I0906 22:48:11.521118       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:48:11.521193       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:48:11.521229       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:48:11.543298       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:48:11.543363       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:48:11.543372       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:48:11.543382       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:48:11.543396       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:48:11.543485       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:48:11.543591       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:48:11.543598       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:48:11.543941       1 config.go:317] "Starting service config controller"
	I0906 22:48:11.543973       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:48:11.543989       1 config.go:444] "Starting node config controller"
	I0906 22:48:11.543992       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:48:11.544848       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:48:11.544875       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:48:11.644060       1 shared_informer.go:262] Caches are synced for node config
	I0906 22:48:11.644145       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:48:11.645250       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [ed6a7f025fdd] <==
	* W0906 22:47:53.929224       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 22:47:53.929303       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 22:47:53.929342       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 22:47:53.929388       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:47:53.930317       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 22:47:53.930333       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 22:47:53.930408       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 22:47:53.930453       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0906 22:47:53.930433       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 22:47:53.930555       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0906 22:47:53.930564       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 22:47:53.930575       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 22:47:53.930655       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 22:47:53.930688       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 22:47:53.930724       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 22:47:53.930730       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 22:47:53.930772       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 22:47:53.930830       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 22:47:53.930885       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 22:47:53.930898       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 22:47:54.805752       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 22:47:54.805958       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 22:47:54.936756       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 22:47:54.936818       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0906 22:47:55.225449       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:43:15 UTC, end at Tue 2022-09-06 22:49:07 UTC. --
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.976898   11051 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998001   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b58d2960-b28e-45dc-ad87-ce8a61130c78-lib-modules\") pod \"kube-proxy-85lwm\" (UID: \"b58d2960-b28e-45dc-ad87-ce8a61130c78\") " pod="kube-system/kube-proxy-85lwm"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998089   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/63c69301-4caf-4664-9ea7-a02f276da821-tmp-volume\") pod \"dashboard-metrics-scraper-7b94984548-5qmp7\" (UID: \"63c69301-4caf-4664-9ea7-a02f276da821\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-5qmp7"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998110   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwv26\" (UniqueName: \"kubernetes.io/projected/14065c20-36c8-457b-a3ae-c7a4132e59f4-kube-api-access-hwv26\") pod \"kubernetes-dashboard-54596f475f-4v92l\" (UID: \"14065c20-36c8-457b-a3ae-c7a4132e59f4\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-4v92l"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998127   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aeeb9062-f6d0-49c4-b625-66e11226d676-tmp-dir\") pod \"metrics-server-5c8fd5cf8-dsmkc\" (UID: \"aeeb9062-f6d0-49c4-b625-66e11226d676\") " pod="kube-system/metrics-server-5c8fd5cf8-dsmkc"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998144   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgz7r\" (UniqueName: \"kubernetes.io/projected/aeeb9062-f6d0-49c4-b625-66e11226d676-kube-api-access-zgz7r\") pod \"metrics-server-5c8fd5cf8-dsmkc\" (UID: \"aeeb9062-f6d0-49c4-b625-66e11226d676\") " pod="kube-system/metrics-server-5c8fd5cf8-dsmkc"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998198   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58d8f\" (UniqueName: \"kubernetes.io/projected/1c03a38b-d8c6-44e4-8404-b8bb5cbad02c-kube-api-access-58d8f\") pod \"storage-provisioner\" (UID: \"1c03a38b-d8c6-44e4-8404-b8bb5cbad02c\") " pod="kube-system/storage-provisioner"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998237   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/14065c20-36c8-457b-a3ae-c7a4132e59f4-tmp-volume\") pod \"kubernetes-dashboard-54596f475f-4v92l\" (UID: \"14065c20-36c8-457b-a3ae-c7a4132e59f4\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-4v92l"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998257   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1c03a38b-d8c6-44e4-8404-b8bb5cbad02c-tmp\") pod \"storage-provisioner\" (UID: \"1c03a38b-d8c6-44e4-8404-b8bb5cbad02c\") " pod="kube-system/storage-provisioner"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998275   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ccf\" (UniqueName: \"kubernetes.io/projected/b58d2960-b28e-45dc-ad87-ce8a61130c78-kube-api-access-47ccf\") pod \"kube-proxy-85lwm\" (UID: \"b58d2960-b28e-45dc-ad87-ce8a61130c78\") " pod="kube-system/kube-proxy-85lwm"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998357   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spphr\" (UniqueName: \"kubernetes.io/projected/6dc3f46c-764e-41ed-8bb1-d475e0fb346d-kube-api-access-spphr\") pod \"coredns-565d847f94-8kwg7\" (UID: \"6dc3f46c-764e-41ed-8bb1-d475e0fb346d\") " pod="kube-system/coredns-565d847f94-8kwg7"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998479   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxqcm\" (UniqueName: \"kubernetes.io/projected/63c69301-4caf-4664-9ea7-a02f276da821-kube-api-access-wxqcm\") pod \"dashboard-metrics-scraper-7b94984548-5qmp7\" (UID: \"63c69301-4caf-4664-9ea7-a02f276da821\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-5qmp7"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998554   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b58d2960-b28e-45dc-ad87-ce8a61130c78-kube-proxy\") pod \"kube-proxy-85lwm\" (UID: \"b58d2960-b28e-45dc-ad87-ce8a61130c78\") " pod="kube-system/kube-proxy-85lwm"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998629   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dc3f46c-764e-41ed-8bb1-d475e0fb346d-config-volume\") pod \"coredns-565d847f94-8kwg7\" (UID: \"6dc3f46c-764e-41ed-8bb1-d475e0fb346d\") " pod="kube-system/coredns-565d847f94-8kwg7"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998680   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b58d2960-b28e-45dc-ad87-ce8a61130c78-xtables-lock\") pod \"kube-proxy-85lwm\" (UID: \"b58d2960-b28e-45dc-ad87-ce8a61130c78\") " pod="kube-system/kube-proxy-85lwm"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998839   11051 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:06.152716   11051 request.go:601] Waited for 1.07830952s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:06.191571   11051 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220906154156-22187\" already exists" pod="kube-system/etcd-no-preload-20220906154156-22187"
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:06.356168   11051 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220906154156-22187\" already exists" pod="kube-system/kube-scheduler-no-preload-20220906154156-22187"
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:06.608615   11051 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220906154156-22187\" already exists" pod="kube-system/kube-apiserver-no-preload-20220906154156-22187"
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:06.769773   11051 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220906154156-22187\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220906154156-22187"
	Sep 06 22:49:07 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:07.168815   11051 remote_image.go:222] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 22:49:07 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:07.168875   11051 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 22:49:07 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:07.168978   11051 kuberuntime_manager.go:862] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zgz7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c8fd5cf8-dsmkc_kube-system(aeeb9062-f6d0-49c4-b625-66e11226d676): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Sep 06 22:49:07 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:07.169004   11051 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c8fd5cf8-dsmkc" podUID=aeeb9062-f6d0-49c4-b625-66e11226d676
	
	* 
	* ==> kubernetes-dashboard [8b8e2ca2957f] <==
	* 2022/09/06 22:48:18 Using namespace: kubernetes-dashboard
	2022/09/06 22:48:18 Using in-cluster config to connect to apiserver
	2022/09/06 22:48:18 Using secret token for csrf signing
	2022/09/06 22:48:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/09/06 22:48:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/09/06 22:48:18 Successful initial request to the apiserver, version: v1.25.0
	2022/09/06 22:48:18 Generating JWE encryption key
	2022/09/06 22:48:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/09/06 22:48:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/09/06 22:48:18 Initializing JWE encryption key from synchronized object
	2022/09/06 22:48:18 Creating in-cluster Sidecar client
	2022/09/06 22:48:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 22:48:18 Serving insecurely on HTTP port: 9090
	2022/09/06 22:49:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 22:48:18 Starting overwatch
	
	* 
	* ==> storage-provisioner [210181832a52] <==
	* I0906 22:48:12.417194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:48:12.436616       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:48:12.436681       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:48:12.443991       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:48:12.444167       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220906154156-22187_c9485597-3f46-4251-8079-a4fa89570583!
	I0906 22:48:12.444259       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5d43032c-4079-4ede-a0a8-32450d421e51", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220906154156-22187_c9485597-3f46-4251-8079-a4fa89570583 became leader
	I0906 22:48:12.544411       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220906154156-22187_c9485597-3f46-4251-8079-a4fa89570583!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
E0906 15:49:08.175013   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220906154156-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c8fd5cf8-dsmkc
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220906154156-22187 describe pod metrics-server-5c8fd5cf8-dsmkc
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220906154156-22187 describe pod metrics-server-5c8fd5cf8-dsmkc: exit status 1 (58.586082ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-dsmkc" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220906154156-22187 describe pod metrics-server-5c8fd5cf8-dsmkc: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220906154156-22187
helpers_test.go:235: (dbg) docker inspect no-preload-20220906154156-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede",
	        "Created": "2022-09-06T22:41:58.377320769Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 241423,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:43:15.472214221Z",
	            "FinishedAt": "2022-09-06T22:43:13.481529189Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede/hosts",
	        "LogPath": "/var/lib/docker/containers/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede/6ea3f3b81380db57d4c01190869279427b16aedef35cd4dc48e93924c1fdaede-json.log",
	        "Name": "/no-preload-20220906154156-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220906154156-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220906154156-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4420df071e7c74d36ffa79d07572668f22f4c9f965efb57f9446b32baed1c3fe-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4420df071e7c74d36ffa79d07572668f22f4c9f965efb57f9446b32baed1c3fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4420df071e7c74d36ffa79d07572668f22f4c9f965efb57f9446b32baed1c3fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4420df071e7c74d36ffa79d07572668f22f4c9f965efb57f9446b32baed1c3fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220906154156-22187",
	                "Source": "/var/lib/docker/volumes/no-preload-20220906154156-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220906154156-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220906154156-22187",
	                "name.minikube.sigs.k8s.io": "no-preload-20220906154156-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f475120e7a6ff49982d8ec081912e3dc66a486d0ef85fc958af19dbcdc2161cc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59517"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59518"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59519"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59521"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f475120e7a6f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220906154156-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6ea3f3b81380",
	                        "no-preload-20220906154156-22187"
	                    ],
	                    "NetworkID": "8d8c17b397b016d05dc5a51f986c20488e0188e802075059a5752f53758b1af6",
	                    "EndpointID": "a970f6b659717a274d02408b8fe682f40f9b381ce3edaef0c9004afb37e58b91",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
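
The `Ports` map in the inspect output above is what the test helpers keep reading via `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` later in this log. A minimal Go sketch of the same lookup, assuming only the JSON shape shown above (the helper name and error handling are illustrative, not minikube source):

	// port_lookup.go: hypothetical sketch of reading a container's mapped
	// host port out of `docker container inspect` JSON.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func hostPort(container, port string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 {
			return "", fmt.Errorf("container %q not found", container)
		}
		b := entries[0].NetworkSettings.Ports[port]
		if len(b) == 0 {
			return "", fmt.Errorf("no host binding for %s", port)
		}
		return b[0].HostPort, nil
	}

	func main() {
		p, err := hostPort("no-preload-20220906154156-22187", "22/tcp")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(p) // "59517" per the inspect output above
	}
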
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220906154156-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220906154156-22187 logs -n 25: (2.570901767s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                | kindnet-20220906152522-22187            | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | kindnet-20220906152522-22187                      |                                         |         |         |                     |                     |
	| start   | -p bridge-20220906152522-22187                    | bridge-20220906152522-22187             | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220906152522-22187 | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | enable-default-cni-20220906152522-22187           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220906152522-22187 | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | enable-default-cni-20220906152522-22187           |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220906152522-22187                    | bridge-20220906152522-22187             | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:39 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| start   | -p                                                | kubenet-20220906152522-22187            | jenkins | v1.26.1 | 06 Sep 22 15:39 PDT | 06 Sep 22 15:40 PDT |
	|         | kubenet-20220906152522-22187                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| delete  | -p bridge-20220906152522-22187                    | bridge-20220906152522-22187             | jenkins | v1.26.1 | 06 Sep 22 15:40 PDT | 06 Sep 22 15:40 PDT |
	| start   | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187             | jenkins | v1.26.1 | 06 Sep 22 15:40 PDT | 06 Sep 22 15:41 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220906152522-22187            | jenkins | v1.26.1 | 06 Sep 22 15:40 PDT | 06 Sep 22 15:40 PDT |
	|         | kubenet-20220906152522-22187                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187             | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187             | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	| start   | -p                                                | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220906152522-22187            | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	|         | kubenet-20220906152522-22187                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:42 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:45 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220906154143-22187    | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220906154156-22187         | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
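
(For readability: each wrapped `start` row above corresponds to a single invocation; e.g. the no-preload restart is `out/minikube-darwin-amd64 start -p no-preload-20220906154156-22187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.0`.)
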
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:47:27
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:47:27.724326   36618 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:47:27.724481   36618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:47:27.724486   36618 out.go:309] Setting ErrFile to fd 2...
	I0906 15:47:27.724490   36618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:47:27.724596   36618 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:47:27.725040   36618 out.go:303] Setting JSON to false
	I0906 15:47:27.740136   36618 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10018,"bootTime":1662494429,"procs":332,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:47:27.740244   36618 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:47:27.762199   36618 out.go:177] * [old-k8s-version-20220906154143-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:47:27.804151   36618 notify.go:193] Checking for updates...
	I0906 15:47:27.826250   36618 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:47:27.848207   36618 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:47:27.874086   36618 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:47:27.895101   36618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:47:27.916094   36618 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:47:27.937719   36618 config.go:180] Loaded profile config "old-k8s-version-20220906154143-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 15:47:27.960007   36618 out.go:177] * Kubernetes 1.25.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.0
	I0906 15:47:27.980813   36618 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:47:28.050338   36618 docker.go:137] docker version: linux-20.10.17
	I0906 15:47:28.050475   36618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:47:28.182336   36618 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:47:28.123754068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:47:28.224979   36618 out.go:177] * Using the docker driver based on existing profile
	I0906 15:47:28.245671   36618 start.go:284] selected driver: docker
	I0906 15:47:28.245703   36618 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:47:28.245851   36618 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:47:28.249022   36618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:47:28.379018   36618 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:47:28.322340605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:47:28.379175   36618 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:47:28.379194   36618 cni.go:95] Creating CNI manager for ""
	I0906 15:47:28.379205   36618 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:47:28.379215   36618 start_flags.go:310] config:
	{Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:47:28.421686   36618 out.go:177] * Starting control plane node old-k8s-version-20220906154143-22187 in cluster old-k8s-version-20220906154143-22187
	I0906 15:47:28.442547   36618 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:47:28.463689   36618 out.go:177] * Pulling base image ...
	I0906 15:47:28.506539   36618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:47:28.506550   36618 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:47:28.506598   36618 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0906 15:47:28.506611   36618 cache.go:57] Caching tarball of preloaded images
	I0906 15:47:28.506757   36618 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:47:28.506777   36618 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 15:47:28.507478   36618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/config.json ...
	I0906 15:47:28.570394   36618 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:47:28.570413   36618 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:47:28.570424   36618 cache.go:208] Successfully downloaded all kic artifacts
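
The cache lines above amount to a stat of the expected tarball path before the download is skipped. A minimal sketch of that existence probe in Go, with the filename layout copied from the path in the log (the function and its parameters are illustrative assumptions; the real preload code may verify more than bare existence):

	// preload_check.go: hypothetical sketch of the "preload exists" probe.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// tarballPath mirrors the cached path seen in the log:
	// .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	func tarballPath(minikubeHome, k8sVersion, runtime, arch string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
			k8sVersion, runtime, arch)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := tarballPath(os.Getenv("MINIKUBE_HOME"), "v1.16.0", "docker", "amd64")
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload missing, would download:", p)
			return
		}
		fmt.Println("found local preload, skipping download:", p)
	}
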
	I0906 15:47:28.570474   36618 start.go:364] acquiring machines lock for old-k8s-version-20220906154143-22187: {Name:mkf6412c70024633cc757c4659ae827dd641d20a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:47:28.570554   36618 start.go:368] acquired machines lock for "old-k8s-version-20220906154143-22187" in 63.129µs
	I0906 15:47:28.570574   36618 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:47:28.570584   36618 fix.go:55] fixHost starting: 
	I0906 15:47:28.570821   36618 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Status}}
	I0906 15:47:28.634799   36618 fix.go:103] recreateIfNeeded on old-k8s-version-20220906154143-22187: state=Stopped err=<nil>
	W0906 15:47:28.634825   36618 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:47:28.677667   36618 out.go:177] * Restarting existing docker container for "old-k8s-version-20220906154143-22187" ...
	I0906 15:47:24.923934   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:26.924897   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:28.925869   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:28.698507   36618 cli_runner.go:164] Run: docker start old-k8s-version-20220906154143-22187
	I0906 15:47:29.031374   36618 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220906154143-22187 --format={{.State.Status}}
	I0906 15:47:29.153450   36618 kic.go:415] container "old-k8s-version-20220906154143-22187" state is running.
	I0906 15:47:29.154026   36618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:47:29.222072   36618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/config.json ...
	I0906 15:47:29.222435   36618 machine.go:88] provisioning docker machine ...
	I0906 15:47:29.222459   36618 ubuntu.go:169] provisioning hostname "old-k8s-version-20220906154143-22187"
	I0906 15:47:29.222536   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:29.288956   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:29.289172   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:29.289186   36618 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220906154143-22187 && echo "old-k8s-version-20220906154143-22187" | sudo tee /etc/hostname
	I0906 15:47:29.409404   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220906154143-22187
	
	I0906 15:47:29.409506   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:29.474903   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:29.475053   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:29.475069   36618 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220906154143-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220906154143-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220906154143-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:47:29.588648   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: 
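
The two SSH commands above are the standard hostname-provisioning pair: the first writes the profile name to the kernel hostname and /etc/hostname, and the second pins it in /etc/hosts (rewriting an existing 127.0.1.1 entry if one is present) so the node name resolves inside the container.
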
	I0906 15:47:29.588669   36618 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:47:29.588700   36618 ubuntu.go:177] setting up certificates
	I0906 15:47:29.588721   36618 provision.go:83] configureAuth start
	I0906 15:47:29.588785   36618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:47:29.653294   36618 provision.go:138] copyHostCerts
	I0906 15:47:29.653379   36618 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:47:29.653389   36618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:47:29.653484   36618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:47:29.653690   36618 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:47:29.653700   36618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:47:29.653761   36618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:47:29.653906   36618 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:47:29.653931   36618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:47:29.653991   36618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:47:29.654107   36618 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220906154143-22187 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220906154143-22187]
	I0906 15:47:29.819591   36618 provision.go:172] copyRemoteCerts
	I0906 15:47:29.819655   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:47:29.819697   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:29.883624   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:29.965244   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:47:29.981832   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0906 15:47:29.998925   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:47:30.015333   36618 provision.go:86] duration metric: configureAuth took 426.595674ms
	I0906 15:47:30.015347   36618 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:47:30.015480   36618 config.go:180] Loaded profile config "old-k8s-version-20220906154143-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 15:47:30.015536   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.078928   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:30.079080   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:30.079097   36618 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:47:30.191405   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:47:30.191416   36618 ubuntu.go:71] root file system type: overlay
	I0906 15:47:30.191564   36618 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:47:30.191653   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.257341   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:30.257518   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:30.257566   36618 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:47:30.378325   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:47:30.378415   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.442083   36618 main.go:134] libmachine: Using SSH client type: native
	I0906 15:47:30.442233   36618 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59556 <nil> <nil>}
	I0906 15:47:30.442245   36618 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:47:30.558345   36618 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:47:30.558369   36618 machine.go:91] provisioned docker machine in 1.335922482s
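
The `diff -u ... || { mv ...; systemctl ... }` one-liner above installs docker.service.new and restarts Docker only when the rendered unit differs from what is on disk, so an unchanged unit costs nothing. The same guard expressed in Go, as a hypothetical sketch (names and paths illustrative, not minikube source):

	// unit_update.go: sketch of the write-only-if-changed guard.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// updateIfChanged reports whether the unit file was replaced; callers
	// would follow a true result with daemon-reload and a service restart.
	func updateIfChanged(installed string, rendered []byte) (bool, error) {
		have, err := os.ReadFile(installed)
		if err == nil && bytes.Equal(have, rendered) {
			return false, nil // identical: skip mv, daemon-reload, restart
		}
		tmp := installed + ".new"
		if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(tmp, installed)
	}

	func main() {
		changed, err := updateIfChanged("/lib/systemd/system/docker.service",
			[]byte("[Unit]\n"))
		fmt.Println(changed, err)
	}
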
	I0906 15:47:30.558380   36618 start.go:300] post-start starting for "old-k8s-version-20220906154143-22187" (driver="docker")
	I0906 15:47:30.558385   36618 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:47:30.558449   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:47:30.558496   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.623093   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:30.705359   36618 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:47:30.708767   36618 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:47:30.708781   36618 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:47:30.708788   36618 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:47:30.708793   36618 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:47:30.708801   36618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:47:30.708902   36618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:47:30.709047   36618 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:47:30.709191   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:47:30.716071   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:47:30.733188   36618 start.go:303] post-start completed in 174.799919ms
	I0906 15:47:30.733264   36618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:47:30.733307   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.797534   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:30.879275   36618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:47:30.883629   36618 fix.go:57] fixHost completed within 2.313039871s
	I0906 15:47:30.883640   36618 start.go:83] releasing machines lock for "old-k8s-version-20220906154143-22187", held for 2.313072798s
	I0906 15:47:30.883707   36618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220906154143-22187
	I0906 15:47:30.948370   36618 ssh_runner.go:195] Run: systemctl --version
	I0906 15:47:30.948389   36618 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0906 15:47:30.948452   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:30.948458   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:31.016338   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:31.016439   36618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59556 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/old-k8s-version-20220906154143-22187/id_rsa Username:docker}
	I0906 15:47:31.248577   36618 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:47:31.259106   36618 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:47:31.259179   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:47:31.270476   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:47:31.283021   36618 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:47:31.353154   36618 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:47:31.426585   36618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:47:31.501244   36618 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:47:31.715701   36618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:47:31.753351   36618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:47:31.831581   36618 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0906 15:47:31.831765   36618 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220906154143-22187 dig +short host.docker.internal
	I0906 15:47:31.962726   36618 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:47:31.962882   36618 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:47:31.967458   36618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
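
The hosts-file one-liner above is a replace-or-append: grep -v strips any line already tab-suffixed with host.minikube.internal, the echo appends the fresh mapping, and the result is staged in a temp file before sudo cp puts it back, so a half-written /etc/hosts is never left behind. A hedged Go sketch of the same rewrite (not minikube's actual code; the file path is a stand-in):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost drops any existing line ending in "\t<name>" and appends a
    // fresh "<ip>\t<name>" entry, mirroring the grep -v / echo / cp pipeline.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // equivalent of grep -v $'\t<name>$'
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Hypothetical local file standing in for the guest's /etc/hosts.
        if err := pinHost("hosts.test", "192.168.65.2", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
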
	I0906 15:47:31.977699   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:32.041454   36618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 15:47:32.041543   36618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:47:32.072812   36618 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0906 15:47:32.072839   36618 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:47:32.072992   36618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:47:32.104153   36618 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0906 15:47:32.104174   36618 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:47:32.104248   36618 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:47:32.178837   36618 cni.go:95] Creating CNI manager for ""
	I0906 15:47:32.178849   36618 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:47:32.178864   36618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:47:32.178876   36618 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220906154143-22187 NodeName:old-k8s-version-20220906154143-22187 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:47:32.178983   36618 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220906154143-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220906154143-22187
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 15:47:32.179051   36618 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220906154143-22187 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:47:32.179104   36618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0906 15:47:32.186748   36618 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:47:32.186801   36618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:47:32.194237   36618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0906 15:47:32.207073   36618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:47:32.219494   36618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0906 15:47:32.231803   36618 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:47:32.235747   36618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:47:32.245191   36618 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187 for IP: 192.168.67.2
	I0906 15:47:32.245304   36618 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:47:32.245353   36618 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:47:32.245429   36618 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/client.key
	I0906 15:47:32.245528   36618 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key.c7fa3a9e
	I0906 15:47:32.245585   36618 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key
	I0906 15:47:32.245795   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:47:32.245830   36618 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:47:32.245842   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:47:32.245883   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:47:32.245913   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:47:32.245939   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:47:32.246002   36618 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:47:32.246567   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:47:32.263431   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:47:32.280089   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:47:32.296976   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/old-k8s-version-20220906154143-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:47:32.313479   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:47:32.330881   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:47:32.347457   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:47:32.364209   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:47:32.381370   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:47:32.398376   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:47:32.415314   36618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:47:32.435759   36618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:47:32.448194   36618 ssh_runner.go:195] Run: openssl version
	I0906 15:47:32.453444   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:47:32.461315   36618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:47:32.465115   36618 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:47:32.465156   36618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:47:32.470177   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:47:32.477357   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:47:32.486000   36618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:47:32.490512   36618 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:47:32.490562   36618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:47:32.495831   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:47:32.503224   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:47:32.510979   36618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:47:32.514699   36618 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:47:32.514745   36618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:47:32.519767   36618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
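
The three test/ln pairs above implement OpenSSL's hashed-directory convention: openssl x509 -hash -noout prints the certificate's subject hash (3ec20f2e, b5213941, 51391683 here), and a symlink named <hash>.0 in /etc/ssl/certs is what OpenSSL-linked clients walk when validating a chain. A sketch of the same dance in Go, assuming an openssl binary on PATH and using an illustrative path:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert links a PEM into /etc/ssl/certs under its subject hash,
    // mirroring the `test -L ... || ln -fs ...` commands in the log.
    func trustCert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if _, err := os.Lstat(link); err == nil {
            return nil // symlink already present
        }
        return os.Symlink(pem, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
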
	I0906 15:47:32.527226   36618 kubeadm.go:396] StartCluster: {Name:old-k8s-version-20220906154143-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220906154143-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:47:32.527360   36618 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:47:32.556441   36618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:47:32.563997   36618 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:47:32.564011   36618 kubeadm.go:627] restartCluster start
	I0906 15:47:32.564056   36618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:47:32.571007   36618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:32.571067   36618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220906154143-22187
	I0906 15:47:32.636552   36618 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220906154143-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:47:32.636751   36618 kubeconfig.go:127] "old-k8s-version-20220906154143-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:47:32.637095   36618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:47:32.638467   36618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:47:32.646914   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:32.646978   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:32.655320   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:31.423739   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:33.426307   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:32.857447   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:32.857626   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:32.867436   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.055442   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.055550   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.064764   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.255502   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.255571   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.264739   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.457093   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.457154   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.466479   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.656960   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.657112   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.666024   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:33.855454   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:33.855536   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:33.865698   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.056197   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.056330   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.066451   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.255620   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.255698   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.265530   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.456233   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.456324   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.465752   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.657449   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.657577   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.667461   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:34.856463   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:34.856602   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:34.867085   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.055895   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.056016   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.065978   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.257473   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.257650   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.268029   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.455491   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.455556   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.466826   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.657485   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.657645   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.667632   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.667642   36618 api_server.go:165] Checking apiserver status ...
	I0906 15:47:35.667684   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:47:35.675713   36618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:47:35.675723   36618 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:47:35.675732   36618 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:47:35.675789   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:47:35.705109   36618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:47:35.715429   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:47:35.723190   36618 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Sep  6 22:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Sep  6 22:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Sep  6 22:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Sep  6 22:43 /etc/kubernetes/scheduler.conf
	
	I0906 15:47:35.723254   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:47:35.730810   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:47:35.738212   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:47:35.745776   36618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:47:35.753962   36618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:47:35.761363   36618 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:47:35.761377   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:35.813510   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:36.680895   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:36.890193   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:36.953067   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:47:37.007310   36618 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:47:37.007369   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:37.515752   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
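
The repeated pgrep lines that follow are a poll loop: once the control-plane manifests are written, minikube re-runs sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms (compare the timestamps) until a PID appears. A minimal sketch of that wait, assuming a fixed deadline rather than whatever timeout the real code uses:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServer polls pgrep until it reports a kube-apiserver PID or
    // the deadline passes, matching the ~500ms cadence visible in the log.
    func waitForAPIServer(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("apiserver did not appear within %s", timeout)
    }

    func main() {
        if pid, err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("apiserver pid:", pid)
        }
    }
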
	I0906 15:47:35.923079   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:37.924999   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:38.017852   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:38.517627   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:39.017853   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:39.516530   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:40.016953   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:40.516341   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:41.017684   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:41.516454   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:42.017850   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:42.516815   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:40.424300   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:42.426950   36235 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace has status "Ready":"False"
	I0906 15:47:43.918419   36235 pod_ready.go:81] duration metric: took 4m0.072197501s waiting for pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace to be "Ready" ...
	E0906 15:47:43.918436   36235 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c8fd5cf8-dkqvp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 15:47:43.918447   36235 pod_ready.go:38] duration metric: took 4m14.120338871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:47:43.918495   36235 kubeadm.go:631] restartCluster took 4m24.896287854s
	W0906 15:47:43.918570   36235 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0906 15:47:43.918586   36235 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 15:47:43.015747   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:43.517836   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:44.017465   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:44.515754   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:45.015795   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:45.515857   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:46.015952   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:46.515728   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:47.015825   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:47.515705   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:48.252291   36235 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.333677266s)
	I0906 15:47:48.252349   36235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:47:48.262002   36235 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:47:48.269405   36235 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:47:48.269449   36235 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:47:48.277327   36235 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:47:48.277359   36235 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:47:48.317989   36235 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
	I0906 15:47:48.318026   36235 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:47:48.417304   36235 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:47:48.417396   36235 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:47:48.417478   36235 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:47:48.541595   36235 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:47:48.562972   36235 out.go:204]   - Generating certificates and keys ...
	I0906 15:47:48.563021   36235 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:47:48.563091   36235 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:47:48.563173   36235 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:47:48.563227   36235 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:47:48.563328   36235 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:47:48.563374   36235 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:47:48.563428   36235 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:47:48.563486   36235 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:47:48.563570   36235 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:47:48.563636   36235 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:47:48.563668   36235 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:47:48.563707   36235 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:47:48.833960   36235 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:47:49.099973   36235 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:47:49.343100   36235 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:47:49.443942   36235 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:47:49.456526   36235 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:47:49.457063   36235 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:47:49.457094   36235 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 15:47:49.530288   36235 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:47:48.015772   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:48.516034   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:49.015789   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:49.516635   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:50.015757   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:50.515860   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:51.015748   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:51.517724   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:52.016065   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:52.516074   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:49.551894   36235 out.go:204]   - Booting up control plane ...
	I0906 15:47:49.551978   36235 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:47:49.552040   36235 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:47:49.552108   36235 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:47:49.552174   36235 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:47:49.552326   36235 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:47:55.535092   36235 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002671 seconds
	I0906 15:47:55.535186   36235 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 15:47:55.542271   36235 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 15:47:56.055146   36235 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 15:47:56.055306   36235 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220906154156-22187 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 15:47:56.562432   36235 kubeadm.go:317] [bootstrap-token] Using token: mcb4oi.u2w1oe6vxlxjfpx3
	I0906 15:47:53.016794   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:53.516769   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:54.015802   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:54.516398   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:55.015770   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:55.517646   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:56.016754   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:56.517915   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:57.015874   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:57.517815   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:56.601464   36235 out.go:204]   - Configuring RBAC rules ...
	I0906 15:47:56.601575   36235 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 15:47:56.601643   36235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 15:47:56.641208   36235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 15:47:56.643306   36235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 15:47:56.645139   36235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 15:47:56.647038   36235 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 15:47:56.655615   36235 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 15:47:56.801367   36235 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0906 15:47:56.970199   36235 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0906 15:47:56.970885   36235 kubeadm.go:317] 
	I0906 15:47:56.970955   36235 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0906 15:47:56.970967   36235 kubeadm.go:317] 
	I0906 15:47:56.971024   36235 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0906 15:47:56.971031   36235 kubeadm.go:317] 
	I0906 15:47:56.971052   36235 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0906 15:47:56.971109   36235 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 15:47:56.971164   36235 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 15:47:56.971174   36235 kubeadm.go:317] 
	I0906 15:47:56.971224   36235 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0906 15:47:56.971234   36235 kubeadm.go:317] 
	I0906 15:47:56.971278   36235 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 15:47:56.971287   36235 kubeadm.go:317] 
	I0906 15:47:56.971325   36235 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0906 15:47:56.971403   36235 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 15:47:56.971510   36235 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 15:47:56.971519   36235 kubeadm.go:317] 
	I0906 15:47:56.971578   36235 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 15:47:56.971660   36235 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0906 15:47:56.971668   36235 kubeadm.go:317] 
	I0906 15:47:56.971752   36235 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token mcb4oi.u2w1oe6vxlxjfpx3 \
	I0906 15:47:56.971848   36235 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
	I0906 15:47:56.971881   36235 kubeadm.go:317] 	--control-plane 
	I0906 15:47:56.971894   36235 kubeadm.go:317] 
	I0906 15:47:56.971972   36235 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0906 15:47:56.971979   36235 kubeadm.go:317] 
	I0906 15:47:56.972032   36235 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token mcb4oi.u2w1oe6vxlxjfpx3 \
	I0906 15:47:56.972097   36235 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
	I0906 15:47:56.975523   36235 kubeadm.go:317] W0906 22:47:48.322413    7890 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:47:56.975665   36235 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:47:56.975727   36235 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:47:56.975827   36235 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:47:56.975848   36235 cni.go:95] Creating CNI manager for ""
	I0906 15:47:56.975856   36235 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:47:56.975871   36235 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:47:56.975972   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:56.975973   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4 minikube.k8s.io/name=no-preload-20220906154156-22187 minikube.k8s.io/updated_at=2022_09_06T15_47_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:57.011452   36235 ops.go:34] apiserver oom_adj: -16
	I0906 15:47:57.149466   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:57.710791   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:58.212221   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:58.711933   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:59.211648   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:47:58.015852   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:58.516201   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:59.016002   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:59.515787   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:00.017830   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:00.516806   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:01.015847   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:01.516910   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:02.016851   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:02.517315   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:47:59.711025   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:00.212072   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:00.712060   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:01.211123   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:01.711481   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:02.210969   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:02.710737   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:03.210635   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:03.710789   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:04.210794   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:03.015916   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:03.516678   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:04.017779   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:04.517538   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:05.016029   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:05.516024   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:06.016955   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:06.516680   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:07.017903   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:07.517898   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:04.710808   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:05.210845   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:05.712748   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:06.210668   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:06.711457   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:07.212733   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:07.710645   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:08.212201   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:08.712700   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:09.210804   36235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:48:09.422281   36235 kubeadm.go:1046] duration metric: took 12.446355264s to wait for elevateKubeSystemPrivileges.
	I0906 15:48:09.422300   36235 kubeadm.go:398] StartCluster complete in 4m50.435904288s
	I0906 15:48:09.422319   36235 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:48:09.422390   36235 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:48:09.422978   36235 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:48:09.937253   36235 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220906154156-22187" rescaled to 1
	I0906 15:48:09.937287   36235 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:48:09.937295   36235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:48:09.937313   36235 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 15:48:09.960624   36235 out.go:177] * Verifying Kubernetes components...
	I0906 15:48:09.937453   36235 config.go:180] Loaded profile config "no-preload-20220906154156-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:48:09.960682   36235 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220906154156-22187"
	I0906 15:48:09.960681   36235 addons.go:65] Setting dashboard=true in profile "no-preload-20220906154156-22187"
	I0906 15:48:09.960681   36235 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220906154156-22187"
	I0906 15:48:09.960711   36235 addons.go:65] Setting metrics-server=true in profile "no-preload-20220906154156-22187"
	I0906 15:48:09.980494   36235 addons.go:153] Setting addon metrics-server=true in "no-preload-20220906154156-22187"
	W0906 15:48:09.980504   36235 addons.go:162] addon metrics-server should already be in state true
	I0906 15:48:09.980504   36235 addons.go:153] Setting addon dashboard=true in "no-preload-20220906154156-22187"
	W0906 15:48:09.980566   36235 addons.go:162] addon dashboard should already be in state true
	I0906 15:48:09.980574   36235 host.go:66] Checking if "no-preload-20220906154156-22187" exists ...
	I0906 15:48:09.980577   36235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:48:09.980511   36235 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220906154156-22187"
	I0906 15:48:09.980590   36235 host.go:66] Checking if "no-preload-20220906154156-22187" exists ...
	W0906 15:48:09.980597   36235 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:48:09.980518   36235 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220906154156-22187"
	I0906 15:48:09.980626   36235 host.go:66] Checking if "no-preload-20220906154156-22187" exists ...
	I0906 15:48:09.980876   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:09.980887   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:09.980942   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:09.981390   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:10.008119   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.008264   36235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 15:48:10.082564   36235 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220906154156-22187"
	W0906 15:48:10.099577   36235 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:48:10.099546   36235 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 15:48:10.099633   36235 host.go:66] Checking if "no-preload-20220906154156-22187" exists ...
	I0906 15:48:10.121403   36235 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 15:48:10.142121   36235 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:48:10.142134   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 15:48:10.121864   36235 cli_runner.go:164] Run: docker container inspect no-preload-20220906154156-22187 --format={{.State.Status}}
	I0906 15:48:10.142208   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.163355   36235 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:48:10.184356   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:48:10.184368   36235 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 15:48:10.184499   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.227230   36235 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0906 15:48:10.253392   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 15:48:10.253413   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 15:48:10.253505   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.256709   36235 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220906154156-22187" to be "Ready" ...
	I0906 15:48:10.268170   36235 node_ready.go:49] node "no-preload-20220906154156-22187" has status "Ready":"True"
	I0906 15:48:10.268183   36235 node_ready.go:38] duration metric: took 11.34914ms waiting for node "no-preload-20220906154156-22187" to be "Ready" ...
	I0906 15:48:10.268191   36235 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:48:10.269285   36235 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:48:10.269303   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:48:10.269393   36235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220906154156-22187
	I0906 15:48:10.272640   36235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59517 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/no-preload-20220906154156-22187/id_rsa Username:docker}
	I0906 15:48:10.279072   36235 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-8kwg7" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:10.287591   36235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59517 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/no-preload-20220906154156-22187/id_rsa Username:docker}
	I0906 15:48:10.340399   36235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59517 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/no-preload-20220906154156-22187/id_rsa Username:docker}
	I0906 15:48:10.349408   36235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59517 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/no-preload-20220906154156-22187/id_rsa Username:docker}
	I0906 15:48:10.379572   36235 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 15:48:10.379583   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 15:48:10.394531   36235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:48:10.401434   36235 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 15:48:10.401446   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 15:48:10.419319   36235 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:48:10.419334   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 15:48:10.434945   36235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:48:10.436708   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 15:48:10.436721   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 15:48:10.441562   36235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:48:10.510954   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 15:48:10.510968   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 15:48:10.597909   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 15:48:10.597943   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 15:48:10.635463   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 15:48:10.635476   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 15:48:10.724230   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 15:48:10.724243   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 15:48:10.809449   36235 start.go:810] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0906 15:48:10.810185   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 15:48:10.810198   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 15:48:10.831489   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 15:48:10.831503   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 15:48:10.924259   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 15:48:10.924282   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 15:48:11.020863   36235 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:48:11.020885   36235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 15:48:11.098604   36235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:48:11.132510   36235 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220906154156-22187"
	I0906 15:48:11.942349   36235 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0906 15:48:08.017766   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:08.516568   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:09.017963   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:09.516751   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:10.016603   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:10.515832   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:11.015880   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:11.515835   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:12.015846   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:12.515867   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:11.978411   36235 addons.go:414] enableAddons completed in 2.041098799s
	I0906 15:48:12.301175   36235 pod_ready.go:102] pod "coredns-565d847f94-8kwg7" in "kube-system" namespace has status "Ready":"False"
	I0906 15:48:13.015821   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:13.515843   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:14.017921   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:14.515835   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:15.015965   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:15.516522   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:16.015903   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:16.515800   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:17.015904   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:17.515890   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:14.796966   36235 pod_ready.go:102] pod "coredns-565d847f94-8kwg7" in "kube-system" namespace has status "Ready":"False"
	I0906 15:48:16.296411   36235 pod_ready.go:92] pod "coredns-565d847f94-8kwg7" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.296426   36235 pod_ready.go:81] duration metric: took 6.017319795s waiting for pod "coredns-565d847f94-8kwg7" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.296434   36235 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-mqnzj" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.302113   36235 pod_ready.go:92] pod "coredns-565d847f94-mqnzj" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.302126   36235 pod_ready.go:81] duration metric: took 5.686272ms waiting for pod "coredns-565d847f94-mqnzj" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.302135   36235 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.308452   36235 pod_ready.go:92] pod "etcd-no-preload-20220906154156-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.308461   36235 pod_ready.go:81] duration metric: took 6.320797ms waiting for pod "etcd-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.308468   36235 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.312981   36235 pod_ready.go:92] pod "kube-apiserver-no-preload-20220906154156-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.312991   36235 pod_ready.go:81] duration metric: took 4.518776ms waiting for pod "kube-apiserver-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.312997   36235 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.317752   36235 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220906154156-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.317762   36235 pod_ready.go:81] duration metric: took 4.759615ms waiting for pod "kube-controller-manager-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.317768   36235 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-85lwm" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.693753   36235 pod_ready.go:92] pod "kube-proxy-85lwm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:16.693763   36235 pod_ready.go:81] duration metric: took 375.989665ms waiting for pod "kube-proxy-85lwm" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:16.693771   36235 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:17.094847   36235 pod_ready.go:92] pod "kube-scheduler-no-preload-20220906154156-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:48:17.094858   36235 pod_ready.go:81] duration metric: took 401.058363ms waiting for pod "kube-scheduler-no-preload-20220906154156-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:48:17.094863   36235 pod_ready.go:38] duration metric: took 6.826644879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:48:17.094877   36235 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:48:17.094923   36235 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:17.105083   36235 api_server.go:71] duration metric: took 7.167759192s to wait for apiserver process to appear ...
	I0906 15:48:17.105099   36235 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:48:17.105109   36235 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59521/healthz ...
	I0906 15:48:17.110476   36235 api_server.go:266] https://127.0.0.1:59521/healthz returned 200:
	ok
	I0906 15:48:17.111716   36235 api_server.go:140] control plane version: v1.25.0
	I0906 15:48:17.111724   36235 api_server.go:130] duration metric: took 6.620173ms to wait for apiserver health ...
	I0906 15:48:17.111732   36235 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:48:17.296978   36235 system_pods.go:59] 9 kube-system pods found
	I0906 15:48:17.296991   36235 system_pods.go:61] "coredns-565d847f94-8kwg7" [6dc3f46c-764e-41ed-8bb1-d475e0fb346d] Running
	I0906 15:48:17.296995   36235 system_pods.go:61] "coredns-565d847f94-mqnzj" [6c277a1a-5f42-45f3-b1ac-20b7a030c5e3] Running
	I0906 15:48:17.296998   36235 system_pods.go:61] "etcd-no-preload-20220906154156-22187" [d293ae93-c12f-4d61-8843-8726be90988e] Running
	I0906 15:48:17.297003   36235 system_pods.go:61] "kube-apiserver-no-preload-20220906154156-22187" [2e0c2b15-cc49-4adb-96f9-7d6d357a6f67] Running
	I0906 15:48:17.297007   36235 system_pods.go:61] "kube-controller-manager-no-preload-20220906154156-22187" [50b21e87-f3e3-49b5-9d7c-793afe2c7a89] Running
	I0906 15:48:17.297011   36235 system_pods.go:61] "kube-proxy-85lwm" [b58d2960-b28e-45dc-ad87-ce8a61130c78] Running
	I0906 15:48:17.297017   36235 system_pods.go:61] "kube-scheduler-no-preload-20220906154156-22187" [21304ce8-b2e8-4c39-b113-e4b28c6dd61f] Running
	I0906 15:48:17.297022   36235 system_pods.go:61] "metrics-server-5c8fd5cf8-dsmkc" [aeeb9062-f6d0-49c4-b625-66e11226d676] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:48:17.297027   36235 system_pods.go:61] "storage-provisioner" [1c03a38b-d8c6-44e4-8404-b8bb5cbad02c] Running
	I0906 15:48:17.297032   36235 system_pods.go:74] duration metric: took 185.294386ms to wait for pod list to return data ...
	I0906 15:48:17.297038   36235 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:48:17.493254   36235 default_sa.go:45] found service account: "default"
	I0906 15:48:17.493267   36235 default_sa.go:55] duration metric: took 196.222978ms for default service account to be created ...
	I0906 15:48:17.493276   36235 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:48:17.696748   36235 system_pods.go:86] 9 kube-system pods found
	I0906 15:48:17.696762   36235 system_pods.go:89] "coredns-565d847f94-8kwg7" [6dc3f46c-764e-41ed-8bb1-d475e0fb346d] Running
	I0906 15:48:17.696768   36235 system_pods.go:89] "coredns-565d847f94-mqnzj" [6c277a1a-5f42-45f3-b1ac-20b7a030c5e3] Running
	I0906 15:48:17.696771   36235 system_pods.go:89] "etcd-no-preload-20220906154156-22187" [d293ae93-c12f-4d61-8843-8726be90988e] Running
	I0906 15:48:17.696775   36235 system_pods.go:89] "kube-apiserver-no-preload-20220906154156-22187" [2e0c2b15-cc49-4adb-96f9-7d6d357a6f67] Running
	I0906 15:48:17.696780   36235 system_pods.go:89] "kube-controller-manager-no-preload-20220906154156-22187" [50b21e87-f3e3-49b5-9d7c-793afe2c7a89] Running
	I0906 15:48:17.696784   36235 system_pods.go:89] "kube-proxy-85lwm" [b58d2960-b28e-45dc-ad87-ce8a61130c78] Running
	I0906 15:48:17.696789   36235 system_pods.go:89] "kube-scheduler-no-preload-20220906154156-22187" [21304ce8-b2e8-4c39-b113-e4b28c6dd61f] Running
	I0906 15:48:17.696794   36235 system_pods.go:89] "metrics-server-5c8fd5cf8-dsmkc" [aeeb9062-f6d0-49c4-b625-66e11226d676] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:48:17.696799   36235 system_pods.go:89] "storage-provisioner" [1c03a38b-d8c6-44e4-8404-b8bb5cbad02c] Running
	I0906 15:48:17.696804   36235 system_pods.go:126] duration metric: took 203.523587ms to wait for k8s-apps to be running ...
	I0906 15:48:17.696810   36235 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:48:17.696862   36235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:48:17.706938   36235 system_svc.go:56] duration metric: took 10.121838ms WaitForService to wait for kubelet.
	I0906 15:48:17.706952   36235 kubeadm.go:573] duration metric: took 7.769631523s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:48:17.706972   36235 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:48:17.893458   36235 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:48:17.893470   36235 node_conditions.go:123] node cpu capacity is 6
	I0906 15:48:17.893499   36235 node_conditions.go:105] duration metric: took 186.518729ms to run NodePressure ...
	I0906 15:48:17.893519   36235 start.go:216] waiting for startup goroutines ...
	I0906 15:48:17.927336   36235 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:48:18.002668   36235 out.go:177] * Done! kubectl is now configured to use "no-preload-20220906154156-22187" cluster and "default" namespace by default
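The 36235 run ends healthy: the healthz gate at 15:48:17 is an HTTPS GET against the apiserver port forwarded to the host (59521 in this log), and it returned 200 with body "ok". A rough hand-run equivalent, with TLS verification skipped purely for illustration:

	# Probe the forwarded apiserver port from the host; -k skips certificate
	# verification for this sketch only.
	curl -k https://127.0.0.1:59521/healthz
	# a healthy control plane answers: ok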
	I0906 15:48:18.016702   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:18.516309   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:19.015893   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:19.515844   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:20.015875   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:20.515860   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:21.015861   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:21.515854   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:22.015816   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:22.515876   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:23.016575   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:23.516149   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:24.016188   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:24.515905   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:25.016602   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:25.518008   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:26.016339   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:26.517230   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:27.016823   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:27.516887   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:28.017965   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:28.517474   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:29.017430   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:29.518014   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:30.015916   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:30.516342   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:31.017840   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:31.516300   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:32.016103   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:32.517934   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:33.015945   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:33.516276   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:34.016960   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:34.517486   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:35.018019   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:35.516988   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:36.018005   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:36.516078   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:37.018027   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:37.047583   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.047595   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:37.047651   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:37.076314   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.076326   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:37.076388   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:37.105746   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.105758   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:37.105817   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:37.133889   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.133902   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:37.133959   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:37.163122   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.163133   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:37.163190   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:37.191877   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.191889   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:37.191961   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:37.220968   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.220981   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:37.221041   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:37.249271   36618 logs.go:274] 0 containers: []
	W0906 15:48:37.249284   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:37.249291   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:37.249297   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:37.289900   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:37.289914   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:37.301542   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:37.301557   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:37.353958   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:37.353972   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:37.353979   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:37.368054   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:37.368066   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:39.423867   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055782714s)
	I0906 15:48:41.924165   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:42.016977   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:42.047623   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.047635   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:42.047691   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:42.077331   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.077346   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:42.077407   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:42.107184   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.107199   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:42.107261   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:42.139027   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.139041   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:42.139107   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:42.175702   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.175713   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:42.175776   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:42.205201   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.205215   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:42.205276   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:42.234618   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.234630   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:42.234693   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:42.263411   36618 logs.go:274] 0 containers: []
	W0906 15:48:42.263423   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:42.263430   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:42.263436   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:42.303796   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:42.303810   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:42.315377   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:42.315391   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:42.369166   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:42.369179   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:42.369186   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:42.383742   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:42.383754   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:44.433916   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050143742s)
	I0906 15:48:46.934245   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:47.016004   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:47.046573   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.046585   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:47.046640   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:47.077019   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.077031   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:47.077092   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:47.107321   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.107334   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:47.107389   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:47.137709   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.137721   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:47.137777   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:47.169281   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.169295   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:47.169355   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:47.197280   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.197292   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:47.197350   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:47.226913   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.226930   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:47.226989   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:47.257981   36618 logs.go:274] 0 containers: []
	W0906 15:48:47.257992   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:47.258000   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:47.258006   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:49.312362   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054338446s)
	I0906 15:48:49.312470   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:49.312476   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:49.351688   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:49.351702   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:49.363819   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:49.363836   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:49.415301   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:49.415311   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:49.415318   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:51.930431   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:52.016736   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:52.046820   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.046831   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:52.046886   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:52.075587   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.075599   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:52.075657   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:52.105073   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.105085   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:52.105140   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:52.134789   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.134801   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:52.134864   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:52.162762   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.162782   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:52.162837   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:52.191879   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.191891   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:52.191962   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:52.221137   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.221149   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:52.221204   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:52.250240   36618 logs.go:274] 0 containers: []
	W0906 15:48:52.250253   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:52.250259   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:52.250273   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:52.290244   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:52.290261   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:52.301674   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:52.301688   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:52.353298   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:48:52.353309   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:52.353316   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:52.366721   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:52.366733   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:54.420553   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053802778s)
	I0906 15:48:56.923005   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:48:57.018057   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:48:57.049543   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.049554   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:48:57.049612   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:48:57.078691   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.078706   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:48:57.078777   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:48:57.108669   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.108686   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:48:57.108764   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:48:57.141982   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.141996   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:48:57.142054   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:48:57.172447   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.172459   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:48:57.172522   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:48:57.200955   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.200971   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:48:57.201030   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:48:57.229233   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.229245   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:48:57.229306   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:48:57.258367   36618 logs.go:274] 0 containers: []
	W0906 15:48:57.258379   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:48:57.258386   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:48:57.258394   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:48:57.271869   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:48:57.271881   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:48:59.326190   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054291416s)
	I0906 15:48:59.326348   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:48:59.326355   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:48:59.367821   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:48:59.367839   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:48:59.379672   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:48:59.379685   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:48:59.432111   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:01.932831   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:02.018145   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:02.048231   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.048244   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:02.048299   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:02.077507   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.077520   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:02.077580   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:02.106702   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.106713   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:02.106771   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:02.135555   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.135567   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:02.135631   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:02.164516   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.164529   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:02.164588   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:02.191790   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.191803   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:02.191862   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:02.220273   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.220286   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:02.220351   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:02.249683   36618 logs.go:274] 0 containers: []
	W0906 15:49:02.249695   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:02.249702   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:02.249709   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:02.261264   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:02.261276   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:02.317306   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:02.317320   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:02.317326   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:02.333052   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:02.333066   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:49:04.387574   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054488465s)
	I0906 15:49:04.387694   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:49:04.387705   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:49:06.928014   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:49:07.015920   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:49:07.044778   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.044791   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:49:07.044847   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:49:07.076121   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.076133   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:49:07.076187   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:49:07.105220   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.105233   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:49:07.105295   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:49:07.135579   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.135592   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:49:07.135649   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:49:07.173144   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.173156   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:49:07.173217   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:49:07.201600   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.201611   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:49:07.201668   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:49:07.230545   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.230557   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:49:07.230612   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:49:07.261070   36618 logs.go:274] 0 containers: []
	W0906 15:49:07.261082   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:49:07.261089   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:49:07.261099   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:49:07.272874   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:49:07.272894   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:49:07.325682   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:49:07.325698   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:49:07.325705   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:49:07.340738   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:49:07.340751   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
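Every retry in the 36618 loop above follows the same shape: pgrep for a kube-apiserver process, ask Docker for containers named after each control-plane component, then gather kubelet, dmesg, describe-nodes, Docker, and container-status logs. With no control-plane containers running, each name filter returns 0 containers and `kubectl describe nodes` is refused on localhost:8443. The probes themselves are the two one-liners taken verbatim from the log:

	# Per-component probe: list any container, running or exited, whose
	# kubeadm-style name starts with k8s_kube-apiserver.
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	# Container-status fallback: prefer crictl when installed, else docker.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a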
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:43:15 UTC, end at Tue 2022-09-06 22:49:09 UTC. --
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.350078663Z" level=info msg="ignoring event" container=4df5f446a56dcca28e1946ed92d2e66891a0833df9c4b409a02cf2802390e68a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.439793895Z" level=info msg="ignoring event" container=a595cade58a41868868ccf9184ce8772ff98183c6772fb07e6cd2aced028aedd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.506007062Z" level=info msg="ignoring event" container=a5c4cf0e4faf76b1b93d84809d448f9bb69c6528fcf32cb4185d5a4bd2601115 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.622030519Z" level=info msg="ignoring event" container=2646311a73c58d6c5a3ae3f9c034cf97f75b96129bb3d7f9637f4fbc72844d17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.692220768Z" level=info msg="ignoring event" container=81df2999200e0a690f0898ccd2c7cfb6243ec4d3b8e56d974270b3042f93e19c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.759552096Z" level=info msg="ignoring event" container=dfbbeec10ffbaf2a7a913d9246dbb592447d71794feae941ac5c18143cd324f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.832299685Z" level=info msg="ignoring event" container=1f0693f776848ad7602ff9d4a6ad3b688a75782c7a250620f7989666e9aea946 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:47:47 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:47:47.919298832Z" level=info msg="ignoring event" container=fdbe147768cecf48451ca47565f0f2803de4010768eae96522a94778caedccef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:12 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:12.725719172Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:12 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:12.725759575Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:12 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:12.727067919Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:13 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:13.247698754Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Sep 06 22:48:18 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:18.300963455Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 22:48:18 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:18.481494706Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 22:48:18 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:18.562671068Z" level=info msg="ignoring event" container=2ca3845f19ae5805108af0ad9a0701cfd70498ad9a75e83f4ddc56f8860d4e22 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:18 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:18.609836356Z" level=info msg="ignoring event" container=3804df3cb07bf29599955f7fc54d63c9d879f3baa43fe55a0eec3cfc52ceb762 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:22 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:22.083622764Z" level=info msg="ignoring event" container=e27a0526083f54a65c04ba6f1b18a27acba4f3e3c8fd81e005e022fb57391d06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:22 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:22.883038186Z" level=info msg="ignoring event" container=1fcf202fcdfbeb93cc684861bd69f29a9ff537b915cec520fb3e3f18d6ed0212 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:48:27 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:27.944382485Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:27 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:27.944677438Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:48:27 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:48:27.946092759Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:49:07 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:49:07.146491207Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:49:07 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:49:07.146538612Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:49:07 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:49:07.168316260Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:49:08 no-preload-20220906154156-22187 dockerd[541]: time="2022-09-06T22:49:08.104480271Z" level=info msg="ignoring event" container=e501ee553340c6fa00c44a23ac173b0195ec66db975b1388e99c4b6b58f563be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	e501ee553340c       a90209bb39e3d                                                                                    2 seconds ago        Exited              dashboard-metrics-scraper   2                   92c3b6305000b
	8b8e2ca2957f1       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   52 seconds ago       Running             kubernetes-dashboard        0                   d3c142ba9c3bf
	11e8e600f961f       5185b96f0becf                                                                                    57 seconds ago       Running             coredns                     0                   e98be20667dfc
	210181832a52a       6e38f40d628db                                                                                    57 seconds ago       Running             storage-provisioner         0                   bbcf3683e916c
	5c40cf8f5a3f6       58a9a0c6d96f2                                                                                    58 seconds ago       Running             kube-proxy                  0                   0c77e9e329546
	ed6a7f025fdda       bef2cf3115095                                                                                    About a minute ago   Running             kube-scheduler              0                   eaea5f607cbe4
	b108852ace062       4d2edfd10d3e3                                                                                    About a minute ago   Running             kube-apiserver              0                   d53cb8fdb8695
	60e8e52ff56a3       a8a176a5d5d69                                                                                    About a minute ago   Running             etcd                        0                   adca1e69ac77a
	900da704596f4       1a54c86c03a67                                                                                    About a minute ago   Running             kube-controller-manager     0                   fe44d9b9e12ba
	
	* 
	* ==> coredns [11e8e600f961] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220906154156-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220906154156-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=no-preload-20220906154156-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_47_56_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:47:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220906154156-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:49:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:49:03 +0000   Tue, 06 Sep 2022 22:47:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:49:03 +0000   Tue, 06 Sep 2022 22:47:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:49:03 +0000   Tue, 06 Sep 2022 22:47:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 22:49:03 +0000   Tue, 06 Sep 2022 22:47:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-20220906154156-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                17e68662-37d6-4cb5-b265-48d4c864fb32
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-8kwg7                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     61s
	  kube-system                 etcd-no-preload-20220906154156-22187                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kube-apiserver-no-preload-20220906154156-22187             250m (4%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-no-preload-20220906154156-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-85lwm                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-no-preload-20220906154156-22187             100m (1%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 metrics-server-5c8fd5cf8-dsmkc                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         59s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kubernetes-dashboard        dashboard-metrics-scraper-7b94984548-5qmp7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        kubernetes-dashboard-54596f475f-4v92l                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 58s   kube-proxy       
	  Normal  Starting                 74s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  74s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  74s   kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasSufficientPID
	  Normal  NodeReady                73s   kubelet          Node no-preload-20220906154156-22187 status is now: NodeReady
	  Normal  RegisteredNode           61s   node-controller  Node no-preload-20220906154156-22187 event: Registered Node no-preload-20220906154156-22187 in Controller
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node no-preload-20220906154156-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s    kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [60e8e52ff56a] <==
	* {"level":"info","ts":"2022-09-06T22:47:51.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-09-06T22:47:51.027Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-09-06T22:47:51.027Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-06T22:47:51.027Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:47:51.027Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:47:51.028Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:47:51.028Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:47:51.819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-06T22:47:51.819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-06T22:47:51.819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-09-06T22:47:51.819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-20220906154156-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:47:51.820Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:47:51.821Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:47:51.822Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:47:51.823Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:47:51.826Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:49:10 up  1:05,  0 users,  load average: 0.53, 0.87, 1.02
	Linux no-preload-20220906154156-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b108852ace06] <==
	* I0906 22:47:54.843351       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 22:47:54.843381       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 22:47:55.103074       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:47:55.130887       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:47:55.163984       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0906 22:47:55.167455       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0906 22:47:55.168127       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 22:47:55.170964       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 22:47:55.864944       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:47:56.808409       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:47:56.815086       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0906 22:47:56.821382       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:47:56.886376       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:48:09.327233       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0906 22:48:09.500305       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0906 22:48:11.125193       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.111.64.120]
	I0906 22:48:11.903731       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.171.205]
	I0906 22:48:11.912968       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.187.198]
	W0906 22:48:11.924536       1 handler_proxy.go:102] no RequestInfo found in the context
	W0906 22:48:11.924561       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:48:11.924580       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0906 22:48:11.924586       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0906 22:48:11.924607       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 22:48:11.925762       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [900da704596f] <==
	* I0906 22:48:09.855387       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:48:09.923483       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:48:09.923537       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 22:48:10.929140       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I0906 22:48:11.012324       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c8fd5cf8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0906 22:48:11.020786       1 replica_set.go:550] sync "kube-system/metrics-server-5c8fd5cf8" failed with pods "metrics-server-5c8fd5cf8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0906 22:48:11.033525       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-dsmkc"
	I0906 22:48:11.721881       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I0906 22:48:11.727411       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:48:11.730885       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:48:11.732941       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-54596f475f to 1"
	E0906 22:48:11.735039       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:48:11.735114       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:48:11.735135       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:48:11.740407       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:48:11.742893       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:48:11.742905       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:48:11.743320       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:48:11.743325       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:48:11.750942       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:48:11.751009       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:48:11.800961       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-5qmp7"
	I0906 22:48:11.802947       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-54596f475f-4v92l"
	E0906 22:49:03.524129       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0906 22:49:03.579654       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [5c40cf8f5a3f] <==
	* I0906 22:48:11.521118       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:48:11.521193       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:48:11.521229       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:48:11.543298       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:48:11.543363       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:48:11.543372       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:48:11.543382       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:48:11.543396       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:48:11.543485       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:48:11.543591       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:48:11.543598       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:48:11.543941       1 config.go:317] "Starting service config controller"
	I0906 22:48:11.543973       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:48:11.543989       1 config.go:444] "Starting node config controller"
	I0906 22:48:11.543992       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:48:11.544848       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:48:11.544875       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:48:11.644060       1 shared_informer.go:262] Caches are synced for node config
	I0906 22:48:11.644145       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:48:11.645250       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [ed6a7f025fdd] <==
	* W0906 22:47:53.929224       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 22:47:53.929303       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 22:47:53.929342       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 22:47:53.929388       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:47:53.930317       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 22:47:53.930333       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 22:47:53.930408       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 22:47:53.930453       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0906 22:47:53.930433       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 22:47:53.930555       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0906 22:47:53.930564       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 22:47:53.930575       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 22:47:53.930655       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 22:47:53.930688       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 22:47:53.930724       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 22:47:53.930730       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 22:47:53.930772       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 22:47:53.930830       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 22:47:53.930885       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 22:47:53.930898       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 22:47:54.805752       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 22:47:54.805958       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 22:47:54.936756       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 22:47:54.936818       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0906 22:47:55.225449       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:43:15 UTC, end at Tue 2022-09-06 22:49:11 UTC. --
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998198   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58d8f\" (UniqueName: \"kubernetes.io/projected/1c03a38b-d8c6-44e4-8404-b8bb5cbad02c-kube-api-access-58d8f\") pod \"storage-provisioner\" (UID: \"1c03a38b-d8c6-44e4-8404-b8bb5cbad02c\") " pod="kube-system/storage-provisioner"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998237   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/14065c20-36c8-457b-a3ae-c7a4132e59f4-tmp-volume\") pod \"kubernetes-dashboard-54596f475f-4v92l\" (UID: \"14065c20-36c8-457b-a3ae-c7a4132e59f4\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-4v92l"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998257   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1c03a38b-d8c6-44e4-8404-b8bb5cbad02c-tmp\") pod \"storage-provisioner\" (UID: \"1c03a38b-d8c6-44e4-8404-b8bb5cbad02c\") " pod="kube-system/storage-provisioner"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998275   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ccf\" (UniqueName: \"kubernetes.io/projected/b58d2960-b28e-45dc-ad87-ce8a61130c78-kube-api-access-47ccf\") pod \"kube-proxy-85lwm\" (UID: \"b58d2960-b28e-45dc-ad87-ce8a61130c78\") " pod="kube-system/kube-proxy-85lwm"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998357   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spphr\" (UniqueName: \"kubernetes.io/projected/6dc3f46c-764e-41ed-8bb1-d475e0fb346d-kube-api-access-spphr\") pod \"coredns-565d847f94-8kwg7\" (UID: \"6dc3f46c-764e-41ed-8bb1-d475e0fb346d\") " pod="kube-system/coredns-565d847f94-8kwg7"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998479   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxqcm\" (UniqueName: \"kubernetes.io/projected/63c69301-4caf-4664-9ea7-a02f276da821-kube-api-access-wxqcm\") pod \"dashboard-metrics-scraper-7b94984548-5qmp7\" (UID: \"63c69301-4caf-4664-9ea7-a02f276da821\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-5qmp7"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998554   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b58d2960-b28e-45dc-ad87-ce8a61130c78-kube-proxy\") pod \"kube-proxy-85lwm\" (UID: \"b58d2960-b28e-45dc-ad87-ce8a61130c78\") " pod="kube-system/kube-proxy-85lwm"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998629   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dc3f46c-764e-41ed-8bb1-d475e0fb346d-config-volume\") pod \"coredns-565d847f94-8kwg7\" (UID: \"6dc3f46c-764e-41ed-8bb1-d475e0fb346d\") " pod="kube-system/coredns-565d847f94-8kwg7"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998680   11051 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b58d2960-b28e-45dc-ad87-ce8a61130c78-xtables-lock\") pod \"kube-proxy-85lwm\" (UID: \"b58d2960-b28e-45dc-ad87-ce8a61130c78\") " pod="kube-system/kube-proxy-85lwm"
	Sep 06 22:49:04 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:04.998839   11051 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:06.152716   11051 request.go:601] Waited for 1.07830952s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:06.191571   11051 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220906154156-22187\" already exists" pod="kube-system/etcd-no-preload-20220906154156-22187"
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:06.356168   11051 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220906154156-22187\" already exists" pod="kube-system/kube-scheduler-no-preload-20220906154156-22187"
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:06.608615   11051 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220906154156-22187\" already exists" pod="kube-system/kube-apiserver-no-preload-20220906154156-22187"
	Sep 06 22:49:06 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:06.769773   11051 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220906154156-22187\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220906154156-22187"
	Sep 06 22:49:07 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:07.168815   11051 remote_image.go:222] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 22:49:07 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:07.168875   11051 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 22:49:07 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:07.168978   11051 kuberuntime_manager.go:862] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zgz7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c8fd5cf8-dsmkc_kube-system(aeeb9062-f6d0-49c4-b625-66e11226d676): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Sep 06 22:49:07 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:07.169004   11051 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c8fd5cf8-dsmkc" podUID=aeeb9062-f6d0-49c4-b625-66e11226d676
	Sep 06 22:49:07 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:07.956286   11051 scope.go:115] "RemoveContainer" containerID="1fcf202fcdfbeb93cc684861bd69f29a9ff537b915cec520fb3e3f18d6ed0212"
	Sep 06 22:49:08 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:08.128443   11051 scope.go:115] "RemoveContainer" containerID="e501ee553340c6fa00c44a23ac173b0195ec66db975b1388e99c4b6b58f563be"
	Sep 06 22:49:08 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:08.128643   11051 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7b94984548-5qmp7_kubernetes-dashboard(63c69301-4caf-4664-9ea7-a02f276da821)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-5qmp7" podUID=63c69301-4caf-4664-9ea7-a02f276da821
	Sep 06 22:49:09 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:09.140235   11051 scope.go:115] "RemoveContainer" containerID="1fcf202fcdfbeb93cc684861bd69f29a9ff537b915cec520fb3e3f18d6ed0212"
	Sep 06 22:49:09 no-preload-20220906154156-22187 kubelet[11051]: I0906 22:49:09.140424   11051 scope.go:115] "RemoveContainer" containerID="e501ee553340c6fa00c44a23ac173b0195ec66db975b1388e99c4b6b58f563be"
	Sep 06 22:49:09 no-preload-20220906154156-22187 kubelet[11051]: E0906 22:49:09.140540   11051 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7b94984548-5qmp7_kubernetes-dashboard(63c69301-4caf-4664-9ea7-a02f276da821)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-5qmp7" podUID=63c69301-4caf-4664-9ea7-a02f276da821
	
	* 
	* ==> kubernetes-dashboard [8b8e2ca2957f] <==
	* 2022/09/06 22:48:18 Using namespace: kubernetes-dashboard
	2022/09/06 22:48:18 Using in-cluster config to connect to apiserver
	2022/09/06 22:48:18 Using secret token for csrf signing
	2022/09/06 22:48:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/09/06 22:48:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/09/06 22:48:18 Successful initial request to the apiserver, version: v1.25.0
	2022/09/06 22:48:18 Generating JWE encryption key
	2022/09/06 22:48:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/09/06 22:48:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/09/06 22:48:18 Initializing JWE encryption key from synchronized object
	2022/09/06 22:48:18 Creating in-cluster Sidecar client
	2022/09/06 22:48:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 22:48:18 Serving insecurely on HTTP port: 9090
	2022/09/06 22:49:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 22:48:18 Starting overwatch
	
	* 
	* ==> storage-provisioner [210181832a52] <==
	* I0906 22:48:12.417194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:48:12.436616       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:48:12.436681       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:48:12.443991       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:48:12.444167       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220906154156-22187_c9485597-3f46-4251-8079-a4fa89570583!
	I0906 22:48:12.444259       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5d43032c-4079-4ede-a0a8-32450d421e51", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220906154156-22187_c9485597-3f46-4251-8079-a4fa89570583 became leader
	I0906 22:48:12.544411       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220906154156-22187_c9485597-3f46-4251-8079-a4fa89570583!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220906154156-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c8fd5cf8-dsmkc
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220906154156-22187 describe pod metrics-server-5c8fd5cf8-dsmkc
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220906154156-22187 describe pod metrics-server-5c8fd5cf8-dsmkc: exit status 1 (58.225358ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-dsmkc" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220906154156-22187 describe pod metrics-server-5c8fd5cf8-dsmkc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (42.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Pause (42.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220906154915-22187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187: exit status 2 (16.070185239s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187: exit status 2 (16.078203882s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220906154915-22187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187
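Before the post-mortem, it is worth spelling out what start_stop_delete_test.go:311 is asserting: after `minikube pause`, the templated status of the apiserver must read "Paused", and the run above got "Stopped" instead (the kubelet reading "Stopped" post-pause is tolerated, per the "may be ok" lines). A rough, hedged sketch of that assertion, with the binary path and flags taken verbatim from the log and everything else assumed rather than copied from the real test body:

	package triage

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// assertPaused pauses a profile, then checks the apiserver field of
	// `minikube status`. Note that status exits non-zero for any non-Running
	// component (the "status error: exit status 2 (may be ok)" lines above),
	// so the printed value, not the exit code, is what the check keys on.
	func assertPaused(profile string) error {
		if out, err := exec.Command("out/minikube-darwin-amd64",
			"pause", "-p", profile, "--alsologtostderr", "-v=1").CombinedOutput(); err != nil {
			return fmt.Errorf("pause failed: %v\n%s", err, out)
		}
		out, _ := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile).Output()
		if got := strings.TrimSpace(string(out)); got != "Paused" {
			return fmt.Errorf("post-pause apiserver status = %q; want = %q", got, "Paused")
		}
		return nil
	}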
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220906154915-22187
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220906154915-22187:

-- stdout --
	[
	    {
	        "Id": "3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68",
	        "Created": "2022-09-06T22:49:21.916989006Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270152,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:50:24.656526964Z",
	            "FinishedAt": "2022-09-06T22:50:22.661548267Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68/hosts",
	        "LogPath": "/var/lib/docker/containers/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68-json.log",
	        "Name": "/default-k8s-different-port-20220906154915-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220906154915-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220906154915-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/81391995e8f7fc45b9c9a01ab9037f53766b991043ca9b7dfd6a1abcac58ce48-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81391995e8f7fc45b9c9a01ab9037f53766b991043ca9b7dfd6a1abcac58ce48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81391995e8f7fc45b9c9a01ab9037f53766b991043ca9b7dfd6a1abcac58ce48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81391995e8f7fc45b9c9a01ab9037f53766b991043ca9b7dfd6a1abcac58ce48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220906154915-22187",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220906154915-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220906154915-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220906154915-22187",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220906154915-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "205be613d03681993fd693cb3f7845a436f1438eec75dbf09596e296a882a445",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59715"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59716"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59717"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59718"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59719"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/205be613d036",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220906154915-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3d86188a560b",
	                        "default-k8s-different-port-20220906154915-22187"
	                    ],
	                    "NetworkID": "b1d553093d415a7df6a4f6e69bc112c62a388e7ca5dec486e6bc316fc8b58dbb",
	                    "EndpointID": "405256d7cd2f9d9b492033b06aae637bd8522b00cfa2994816b8bd88c9e407f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
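Most of the inspect dump above is irrelevant to this failure; the useful signal is State (Status "running", Paused false), which says the node container itself was never Docker-paused. That is expected either way, since minikube pause is understood to freeze workloads inside the node rather than pause the kic container itself. A hedged sketch of extracting just those fields instead of dumping the full JSON; the struct and function names are made up for illustration:

	package triage

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerState keeps only the State fields this post-mortem looks at;
	// `docker inspect` emits a JSON array (as in the dump above), and the
	// decoder silently drops every field not declared here.
	type dockerState struct {
		State struct {
			Status  string
			Running bool
			Paused  bool
		}
	}

	func inspectState(container string) (dockerState, error) {
		raw, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return dockerState{}, err
		}
		var states []dockerState
		if err := json.Unmarshal(raw, &states); err != nil {
			return dockerState{}, err
		}
		if len(states) == 0 {
			return dockerState{}, fmt.Errorf("no container %q", container)
		}
		return states[0], nil
	}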
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220906154915-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220906154915-22187 logs -n 25: (2.984918057s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187                     | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	| start   | -p                                                | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220906152522-22187                    | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	|         | kubenet-20220906152522-22187                      |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:42 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:45 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:50:23
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:50:23.383928   37212 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:50:23.384105   37212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:50:23.384110   37212 out.go:309] Setting ErrFile to fd 2...
	I0906 15:50:23.384114   37212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:50:23.384226   37212 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:50:23.384693   37212 out.go:303] Setting JSON to false
	I0906 15:50:23.400568   37212 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10194,"bootTime":1662494429,"procs":338,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:50:23.400663   37212 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:50:23.422701   37212 out.go:177] * [default-k8s-different-port-20220906154915-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:50:23.444975   37212 notify.go:193] Checking for updates...
	I0906 15:50:23.466707   37212 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:50:23.488647   37212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:50:23.509671   37212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:50:23.530748   37212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:50:23.552752   37212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:50:23.575417   37212 config.go:180] Loaded profile config "default-k8s-different-port-20220906154915-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:50:23.576052   37212 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:50:23.643500   37212 docker.go:137] docker version: linux-20.10.17
	I0906 15:50:23.643647   37212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:50:23.772962   37212 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:50:23.713734774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:50:23.816484   37212 out.go:177] * Using the docker driver based on existing profile
	I0906 15:50:23.837697   37212 start.go:284] selected driver: docker
	I0906 15:50:23.837744   37212 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220906154915-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port
-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:50:23.837918   37212 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:50:23.841270   37212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:50:23.972563   37212 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:50:23.911532634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:50:23.972720   37212 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:50:23.972740   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:50:23.972752   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:50:23.972759   37212 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220906154915-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:50:24.016175   37212 out.go:177] * Starting control plane node default-k8s-different-port-20220906154915-22187 in cluster default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.037370   37212 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:50:24.058389   37212 out.go:177] * Pulling base image ...
	I0906 15:50:24.100618   37212 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:50:24.100693   37212 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:50:24.100700   37212 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:50:24.100731   37212 cache.go:57] Caching tarball of preloaded images
	I0906 15:50:24.100971   37212 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:50:24.100991   37212 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:50:24.102052   37212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/config.json ...
	I0906 15:50:24.177644   37212 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:50:24.177679   37212 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:50:24.177695   37212 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:50:24.177751   37212 start.go:364] acquiring machines lock for default-k8s-different-port-20220906154915-22187: {Name:mke86da387e8e60d201d2bf660ca2b291cded1e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:50:24.177833   37212 start.go:368] acquired machines lock for "default-k8s-different-port-20220906154915-22187" in 64.558µs
	I0906 15:50:24.177857   37212 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:50:24.177868   37212 fix.go:55] fixHost starting: 
	I0906 15:50:24.178075   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:50:24.241080   37212 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220906154915-22187: state=Stopped err=<nil>
	W0906 15:50:24.241106   37212 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:50:24.289728   37212 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220906154915-22187" ...
	I0906 15:50:24.310938   37212 cli_runner.go:164] Run: docker start default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.652464   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:50:24.717004   37212 kic.go:415] container "default-k8s-different-port-20220906154915-22187" state is running.
	I0906 15:50:24.717609   37212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.788739   37212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/config.json ...
	I0906 15:50:24.789155   37212 machine.go:88] provisioning docker machine ...
	I0906 15:50:24.789182   37212 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220906154915-22187"
	I0906 15:50:24.789253   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.857628   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:24.857848   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:24.857870   37212 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220906154915-22187 && echo "default-k8s-different-port-20220906154915-22187" | sudo tee /etc/hostname
	I0906 15:50:24.982000   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220906154915-22187
	
	I0906 15:50:24.982089   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.047360   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:25.047575   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:25.047593   37212 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220906154915-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220906154915-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220906154915-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:50:25.159181   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:50:25.159203   37212 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:50:25.159232   37212 ubuntu.go:177] setting up certificates
	I0906 15:50:25.159243   37212 provision.go:83] configureAuth start
	I0906 15:50:25.159305   37212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.227062   37212 provision.go:138] copyHostCerts
	I0906 15:50:25.227183   37212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:50:25.227193   37212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:50:25.227287   37212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:50:25.227513   37212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:50:25.227523   37212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:50:25.227599   37212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:50:25.227736   37212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:50:25.227742   37212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:50:25.227797   37212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:50:25.227954   37212 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220906154915-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220906154915-22187]
	I0906 15:50:25.387707   37212 provision.go:172] copyRemoteCerts
	I0906 15:50:25.387773   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:50:25.387820   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.453896   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:25.538722   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:50:25.559997   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0906 15:50:25.578754   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:50:25.599062   37212 provision.go:86] duration metric: configureAuth took 439.804217ms
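
	configureAuth (re)issues the Docker server certificate, signed by the minikube CA, with the SAN set logged above (node IP, 127.0.0.1, localhost, minikube, and the profile name). A self-contained crypto/x509 sketch of that kind of CA-signed server cert; the throwaway in-memory CA below stands in for .minikube/certs/ca.pem and ca-key.pem, and error handling is trimmed for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA, standing in for the on-disk minikube CA pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SAN list from the provision.go:112 line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject: pkix.Name{
				Organization: []string{"jenkins.default-k8s-different-port-20220906154915-22187"},
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:    []string{"localhost", "minikube", "default-k8s-different-port-20220906154915-22187"},
			IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Printf("server cert: %d DER bytes, SANs %v %v\n", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
	}
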
	I0906 15:50:25.599076   37212 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:50:25.599255   37212 config.go:180] Loaded profile config "default-k8s-different-port-20220906154915-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:50:25.599313   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.664450   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:25.664592   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:25.664602   37212 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:50:25.777980   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:50:25.777993   37212 ubuntu.go:71] root file system type: overlay
	I0906 15:50:25.778137   37212 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:50:25.778210   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.842319   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:25.842469   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:25.842532   37212 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:50:25.964564   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:50:25.964654   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.028806   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:26.028945   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:26.028959   37212 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:50:26.145650   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
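
	The unit update is deliberately idempotent: the rendered unit goes to docker.service.new, and only when diff reports a difference is it moved into place, followed by daemon-reload, enable, and restart. The same compare-then-swap pattern, sketched locally in Go (systemctl invocations mirror the logged command; error handling trimmed):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// swapIfChanged installs newPath over path only when contents differ,
	// then reloads and restarts the unit -- mirroring the
	// `diff -u a b || { mv; daemon-reload; enable; restart; }` command above.
	func swapIfChanged(path, newPath, unit string) error {
		old, _ := os.ReadFile(path) // missing file reads as empty, i.e. "changed"
		neu, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		if bytes.Equal(old, neu) {
			return nil // identical; leave the running service alone
		}
		if err := os.Rename(newPath, path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", unit},
			{"systemctl", "-f", "restart", unit},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := swapIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new", "docker"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
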
	I0906 15:50:26.145668   37212 machine.go:91] provisioned docker machine in 1.356498564s
	I0906 15:50:26.145678   37212 start.go:300] post-start starting for "default-k8s-different-port-20220906154915-22187" (driver="docker")
	I0906 15:50:26.145685   37212 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:50:26.145738   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:50:26.145781   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.214583   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.297685   37212 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:50:26.301530   37212 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:50:26.301546   37212 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:50:26.301553   37212 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:50:26.301557   37212 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:50:26.301567   37212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:50:26.301695   37212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:50:26.301841   37212 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:50:26.301982   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:50:26.309414   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:50:26.326489   37212 start.go:303] post-start completed in 180.79968ms
	I0906 15:50:26.326571   37212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:50:26.326625   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.391005   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.472459   37212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:50:26.476963   37212 fix.go:57] fixHost completed within 2.299088562s
	I0906 15:50:26.476980   37212 start.go:83] releasing machines lock for "default-k8s-different-port-20220906154915-22187", held for 2.299131722s
	I0906 15:50:26.477075   37212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.543830   37212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:50:26.543849   37212 ssh_runner.go:195] Run: systemctl --version
	I0906 15:50:26.543919   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.543933   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.610348   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.610521   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.738898   37212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:50:26.748821   37212 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:50:26.748877   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:50:26.760220   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:50:26.772960   37212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:50:26.840012   37212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:50:26.910847   37212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:50:26.983057   37212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:50:27.222145   37212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:50:27.292399   37212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:50:27.361398   37212 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:50:27.370829   37212 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:50:27.370897   37212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:50:27.374774   37212 start.go:471] Will wait 60s for crictl version
	I0906 15:50:27.374820   37212 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:50:27.478851   37212 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:50:27.478919   37212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:50:27.513172   37212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:50:22.741213   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.747765   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:22.747822   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:22.780713   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.780735   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:22.780743   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:22.780750   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:22.825622   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:22.825640   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:22.849234   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:22.849253   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:22.904336   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:22.904346   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:22.904353   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:22.917771   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:22.917784   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:24.972764   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054937206s)
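
	The container-status command is a shell fallback chain: use crictl when it is on PATH (`which crictl || echo crictl`), and fall back to `docker ps -a` if the crictl invocation fails. The same preference order, expressed with exec.LookPath in a Go sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl when installed and working,
	// otherwise falls back to docker, like the logged bash one-liner.
	func containerStatus() (string, error) {
		if path, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command(path, "ps", "-a").CombinedOutput(); err == nil {
				return string(out), nil
			}
		}
		out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("container status failed:", err)
			return
		}
		fmt.Print(out)
	}
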
	I0906 15:50:27.473071   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:27.516271   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:27.546171   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.546183   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:27.546241   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:27.576500   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.576511   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:27.576565   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:27.605881   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.605898   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:27.605968   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:27.634722   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.634737   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:27.634806   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:27.682458   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.682471   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:27.682562   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:27.715777   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.715790   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:27.715848   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:27.573367   37212 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:50:27.573443   37212 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220906154915-22187 dig +short host.docker.internal
	I0906 15:50:27.702910   37212 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:50:27.703141   37212 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:50:27.707491   37212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:50:27.718288   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:27.784455   37212 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:50:27.784543   37212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:50:27.816064   37212 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:50:27.816080   37212 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:50:27.816149   37212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:50:27.847540   37212 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:50:27.847561   37212 cache_images.go:84] Images are preloaded, skipping loading
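
	Tarball extraction is skipped because `docker images --format {{.Repository}}:{{.Tag}}` already reports every image the preload is expected to provide. A sketch of that containment check (expected list abbreviated from the stdout block above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagesPreloaded reports whether every expected image is already
	// present in the daemon, so the preload tarball need not be extracted.
	func imagesPreloaded(expected []string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range expected {
			if !have[img] {
				return false, nil // at least one image missing: extract the tarball
			}
		}
		return true, nil
	}

	func main() {
		ok, err := imagesPreloaded([]string{
			"registry.k8s.io/kube-apiserver:v1.25.0",
			"registry.k8s.io/etcd:3.5.4-0",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		})
		fmt.Println(ok, err)
	}
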
	I0906 15:50:27.847634   37212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:50:27.921264   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:50:27.921277   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:50:27.921293   37212 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:50:27.921305   37212 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220906154915-22187 NodeName:default-k8s-different-port-20220906154915-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:50:27.921421   37212 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220906154915-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
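
	This whole multi-document YAML is generated from the kubeadm options struct logged at kubeadm.go:158, presumably by rendering a Go text/template. A stripped-down sketch of such a rendering; the struct and template fragment here are illustrative, not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative subset of the kubeadm options logged above.
	type kubeadmOpts struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		PodSubnet        string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, kubeadmOpts{
			AdvertiseAddress: "192.168.76.2",
			APIServerPort:    8444,
			NodeName:         "default-k8s-different-port-20220906154915-22187",
			PodSubnet:        "10.244.0.0/16",
		})
	}
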
	
	I0906 15:50:27.921503   37212 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220906154915-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0906 15:50:27.921560   37212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:50:27.928695   37212 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:50:27.928754   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:50:27.935705   37212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0906 15:50:27.947621   37212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:50:27.959675   37212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0906 15:50:27.971770   37212 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:50:27.975353   37212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:50:27.984747   37212 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187 for IP: 192.168.76.2
	I0906 15:50:27.984863   37212 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:50:27.984928   37212 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:50:27.985007   37212 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.key
	I0906 15:50:27.985064   37212 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/apiserver.key.31bdca25
	I0906 15:50:27.985114   37212 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/proxy-client.key
	I0906 15:50:27.985323   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:50:27.985358   37212 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:50:27.985366   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:50:27.985406   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:50:27.985436   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:50:27.985463   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:50:27.985530   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:50:27.986135   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:50:28.002943   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 15:50:28.019502   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:50:28.036140   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 15:50:28.052467   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:50:28.068669   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:50:28.085037   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:50:28.101413   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:50:28.117752   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:50:28.134563   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:50:28.151206   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:50:28.167822   37212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:50:28.179908   37212 ssh_runner.go:195] Run: openssl version
	I0906 15:50:28.185084   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:50:28.192667   37212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:50:28.196560   37212 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:50:28.196608   37212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:50:28.201652   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:50:28.208974   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:50:28.216562   37212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:50:28.220441   37212 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:50:28.220490   37212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:50:28.225402   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:50:28.232504   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:50:28.240088   37212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:50:28.243702   37212 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:50:28.243751   37212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:50:28.248732   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
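
	Each CA dropped into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (the b5213941.0, 51391683.0, and 3ec20f2e.0 names above), which is what lets OpenSSL's hashed-directory lookup find it. A sketch of that install step, shelling out to openssl exactly as the logged commands do:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> certPath, where
	// <hash> comes from `openssl x509 -hash -noout -in certPath`.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
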
	I0906 15:50:28.255841   37212 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220906154915-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:50:28.255949   37212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:50:28.284221   37212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:50:28.291767   37212 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:50:28.291783   37212 kubeadm.go:627] restartCluster start
	I0906 15:50:28.291828   37212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:50:28.298403   37212 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:28.298458   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:28.362342   37212 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220906154915-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:50:28.362504   37212 kubeconfig.go:127] "default-k8s-different-port-20220906154915-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:50:28.362854   37212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:50:28.364281   37212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:50:28.371727   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.371785   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.380211   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:27.747228   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.747241   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:27.747297   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:27.779174   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.779190   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:27.779197   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:27.779206   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:27.794916   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:27.794934   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:29.852358   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057404132s)
	I0906 15:50:29.852500   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:29.852510   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:29.890521   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:29.890535   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:29.901840   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:29.901851   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:29.954554   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:32.455578   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:32.518172   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:32.548482   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.548495   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:32.548562   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:32.581388   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.581401   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:32.581462   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:32.613423   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.613440   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:32.613516   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:32.646792   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.646806   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:32.646886   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:32.679058   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.679070   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:32.679132   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:32.706281   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.706294   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:32.706349   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:28.580493   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.580582   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.588946   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:28.782354   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.782515   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.792901   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:28.980326   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.980414   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.990348   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.180465   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.180555   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.190991   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.380727   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.380854   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.391256   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.582341   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.582484   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.592874   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.782426   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.782560   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.792358   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.980694   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.980808   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.991278   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.180949   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.181077   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.190362   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.380565   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.380676   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.390714   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.581547   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.581695   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.591408   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.781668   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.781744   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.792474   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.982446   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.982554   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.992872   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.182373   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:31.182496   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:31.193523   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.382353   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:31.382500   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:31.392561   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.392570   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:31.392611   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:31.400629   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.400643   37212 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
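
	The run of "Checking apiserver status" blocks above is a fixed-interval poll: roughly every 200ms, pgrep looks for the apiserver process, until it appears or the retry budget runs out and restartCluster concludes the cluster needs reconfiguring. A minimal version of that loop (interval and timeout inferred from the timestamps; hypothetical helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until the kube-apiserver process shows
	// up or the deadline passes, mirroring the loop in the log above.
	func waitForAPIServer(interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for the condition")
	}

	func main() {
		if err := waitForAPIServer(200*time.Millisecond, 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}
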
	I0906 15:50:31.400653   37212 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:50:31.400714   37212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:50:31.431389   37212 docker.go:443] Stopping containers: [445628b97660 10e168fb5a74 2d246fe6f58a cfa28b4cdb2d e034eba74ac8 bf2a1afd23f7 be82f8452929 127aa7aa3d93 5dd7d8a472ca 9a5362ed7e65 c5cab96a6b6c eb0c740ea4ae b7c21e681624 dc41a5b71413 cd8d53e3fe24 005830c8f8c2]
	I0906 15:50:31.431462   37212 ssh_runner.go:195] Run: docker stop 445628b97660 10e168fb5a74 2d246fe6f58a cfa28b4cdb2d e034eba74ac8 bf2a1afd23f7 be82f8452929 127aa7aa3d93 5dd7d8a472ca 9a5362ed7e65 c5cab96a6b6c eb0c740ea4ae b7c21e681624 dc41a5b71413 cd8d53e3fe24 005830c8f8c2
	I0906 15:50:31.460862   37212 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:50:31.471093   37212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:50:31.478456   37212 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep  6 22:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep  6 22:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep  6 22:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:49 /etc/kubernetes/scheduler.conf
	
	I0906 15:50:31.478500   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 15:50:31.485784   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 15:50:31.493288   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 15:50:31.500416   37212 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.500477   37212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:50:31.507449   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 15:50:31.515558   37212 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.515611   37212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 15:50:31.523180   37212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:50:31.530863   37212 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:50:31.530878   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:31.576875   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.388889   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.520033   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.572876   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.645195   37212 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:50:32.645266   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:33.159857   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:32.740556   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.745575   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:32.745632   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:32.775009   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.775021   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:32.775028   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:32.775035   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:32.815094   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:32.815109   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:32.827508   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:32.827521   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:32.892093   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:32.892116   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:32.892127   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:32.905761   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:32.905772   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:34.959908   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054119058s)
	I0906 15:50:37.461003   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:37.516871   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:37.552075   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.552087   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:37.552148   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:37.588429   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.588444   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:37.588519   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:37.621349   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.621361   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:37.621443   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:37.653420   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.653435   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:37.653497   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:37.684456   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.684471   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:37.684530   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:37.723554   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.723570   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:37.723702   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:33.657999   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:33.704057   37212 api_server.go:71] duration metric: took 1.058865284s to wait for apiserver process to appear ...
	I0906 15:50:33.704096   37212 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:50:33.704112   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:33.705208   37212 api_server.go:256] stopped: https://127.0.0.1:59719/healthz: Get "https://127.0.0.1:59719/healthz": EOF
	I0906 15:50:34.205313   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:36.341332   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:50:36.341358   37212 api_server.go:102] status: https://127.0.0.1:59719/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:50:36.705764   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:36.711926   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:50:36.711938   37212 api_server.go:102] status: https://127.0.0.1:59719/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:50:37.205327   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:37.211521   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:50:37.211533   37212 api_server.go:102] status: https://127.0.0.1:59719/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:50:37.705372   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:37.710926   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 200:
	ok
	I0906 15:50:37.717670   37212 api_server.go:140] control plane version: v1.25.0
	I0906 15:50:37.717683   37212 api_server.go:130] duration metric: took 4.013570504s to wait for apiserver health ...
	I0906 15:50:37.717690   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:50:37.717696   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:50:37.717709   37212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:50:37.726280   37212 system_pods.go:59] 8 kube-system pods found
	I0906 15:50:37.726299   37212 system_pods.go:61] "coredns-565d847f94-wkvwz" [31b21348-6685-429e-8101-a138d6f44c5a] Running
	I0906 15:50:37.726311   37212 system_pods.go:61] "etcd-default-k8s-different-port-20220906154915-22187" [06c9eba4-2eb0-4b4a-8923-14badd5235b3] Running
	I0906 15:50:37.726324   37212 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220906154915-22187" [81942a28-8b69-4b86-80be-4c3d54e8c71e] Running
	I0906 15:50:37.726333   37212 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220906154915-22187" [c814ed45-a563-4476-adc1-e14de96156f8] Running
	I0906 15:50:37.726343   37212 system_pods.go:61] "kube-proxy-t7vx8" [019bd2fb-a0da-477f-9df3-74757d6d787d] Running
	I0906 15:50:37.726356   37212 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220906154915-22187" [9434ace8-3845-48cc-8fff-67183116a1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:50:37.726364   37212 system_pods.go:61] "metrics-server-5c8fd5cf8-wnhzc" [23e9d7cc-1aca-4e2e-8ea9-ba6493231ca0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:50:37.726368   37212 system_pods.go:61] "storage-provisioner" [54518a3e-e36f-4f53-b169-0a62c4eabd66] Running
	I0906 15:50:37.726372   37212 system_pods.go:74] duration metric: took 8.658942ms to wait for pod list to return data ...
	I0906 15:50:37.726378   37212 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:50:37.729378   37212 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:50:37.729393   37212 node_conditions.go:123] node cpu capacity is 6
	I0906 15:50:37.729406   37212 node_conditions.go:105] duration metric: took 3.024346ms to run NodePressure ...
	I0906 15:50:37.729419   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:37.929739   37212 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:50:37.937679   37212 kubeadm.go:778] kubelet initialised
	I0906 15:50:37.937700   37212 kubeadm.go:779] duration metric: took 7.945238ms waiting for restarted kubelet to initialise ...
	I0906 15:50:37.937713   37212 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:50:37.946600   37212 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-wkvwz" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.953168   37212 pod_ready.go:92] pod "coredns-565d847f94-wkvwz" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:37.953178   37212 pod_ready.go:81] duration metric: took 6.561071ms waiting for pod "coredns-565d847f94-wkvwz" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.953187   37212 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.996891   37212 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:37.996900   37212 pod_ready.go:81] duration metric: took 43.709214ms waiting for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.996907   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.002735   37212 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:38.002745   37212 pod_ready.go:81] duration metric: took 5.833437ms waiting for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.002752   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.120788   37212 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:38.120798   37212 pod_ready.go:81] duration metric: took 118.040762ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.120805   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t7vx8" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.763280   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.763293   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:37.763360   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:37.800010   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.800025   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:37.800033   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:37.800042   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:37.848311   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:37.848332   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:37.863600   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:37.863623   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:37.940260   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:37.940278   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:37.940317   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:37.957971   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:37.957982   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:40.011474   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053472357s)
	I0906 15:50:42.513826   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:38.521134   37212 pod_ready.go:92] pod "kube-proxy-t7vx8" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:38.521144   37212 pod_ready.go:81] duration metric: took 400.332006ms waiting for pod "kube-proxy-t7vx8" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.521150   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:40.932176   37212 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:43.018269   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:43.046980   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.046992   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:43.047050   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:43.075170   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.075183   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:43.075237   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:43.104514   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.104526   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:43.104582   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:43.133882   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.133894   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:43.133953   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:43.162356   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.162368   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:43.162431   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:43.197634   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.197648   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:43.197714   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:43.229904   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.229916   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:43.229973   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:43.261120   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.261132   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:43.261140   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:43.261146   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:43.300082   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:43.300097   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:43.312225   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:43.312238   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:43.365232   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:43.365242   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:43.365249   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:43.380452   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:43.380465   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:45.435023   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054541052s)
	I0906 15:50:43.431899   37212 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:45.931027   37212 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:47.430333   37212 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:47.430345   37212 pod_ready.go:81] duration metric: took 8.909165101s waiting for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:47.430351   37212 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:47.936850   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:48.016371   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:48.047334   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.047346   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:48.047400   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:48.079442   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.079453   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:48.079507   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:48.107817   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.107829   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:48.107887   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:48.136570   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.136583   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:48.136641   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:48.165367   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.165380   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:48.165438   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:48.193686   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.193699   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:48.193758   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:48.222001   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.222015   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:48.222073   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:48.249978   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.249990   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:48.249998   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:48.250005   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:48.287143   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:48.287158   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:48.298409   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:48.298422   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:48.356790   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:48.356801   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:48.356815   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:48.370256   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:48.370268   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:50.421619   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051333533s)
	I0906 15:50:49.443659   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:51.942260   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:52.922613   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:53.016799   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:53.048909   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.048921   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:53.048980   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:53.077529   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.077542   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:53.077606   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:53.105518   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.105529   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:53.105586   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:53.135007   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.135020   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:53.135079   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:53.163328   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.163341   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:53.163396   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:53.191132   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.191143   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:53.191199   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:53.219655   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.219668   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:53.219724   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:53.248534   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.248547   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:53.248554   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:53.248561   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:53.260251   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:53.260264   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:53.317573   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:53.317586   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:53.317592   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:53.332188   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:53.332202   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:55.385124   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052904546s)
	I0906 15:50:55.385230   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:55.385237   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:53.942333   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:55.942494   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:57.926420   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:58.017776   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:58.047321   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.047333   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:58.047397   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:58.075870   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.075882   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:58.075939   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:58.106804   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.106816   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:58.106874   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:58.136263   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.136276   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:58.136333   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:58.165517   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.165529   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:58.165586   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:58.194182   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.194194   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:58.194249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:58.222862   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.222874   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:58.222942   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:58.254161   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.254174   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:58.254181   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:58.254192   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:58.307613   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:58.307626   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:58.307633   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:58.321788   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:58.321800   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:00.373491   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051674038s)
	I0906 15:51:00.373598   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:00.373605   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:00.412768   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:00.412783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:58.442534   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:00.942919   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:02.926085   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:03.016795   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:03.045519   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.045535   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:03.045594   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:03.077002   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.077014   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:03.077070   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:03.106731   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.106742   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:03.106803   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:03.137065   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.137078   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:03.137139   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:03.165960   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.165972   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:03.166031   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:03.194538   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.194552   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:03.194615   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:03.223613   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.223625   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:03.223692   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:03.252621   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.252634   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:03.252642   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:03.252649   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:03.293046   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:03.293061   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:03.305992   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:03.306004   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:03.359768   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:03.359777   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:03.359783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:03.374067   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:03.374080   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:05.428493   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054394661s)
	I0906 15:51:03.440923   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:05.940922   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:07.930843   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:08.018364   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:08.050342   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.050356   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:08.050414   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:08.080802   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.080815   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:08.080874   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:08.110557   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.110570   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:08.110626   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:08.140588   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.140601   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:08.140658   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:08.171464   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.171477   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:08.171544   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:08.200615   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.200628   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:08.200684   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:08.231364   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.231376   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:08.231442   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:08.265358   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.265372   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:08.265379   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:08.265386   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:08.279229   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:08.279242   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:10.332629   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053369757s)
	I0906 15:51:10.332737   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:10.332744   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:10.371046   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:10.371061   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:10.382429   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:10.382441   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:10.434114   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:08.442971   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:10.943493   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:12.935172   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:13.016810   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:13.048233   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.048247   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:13.048307   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:13.076100   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.076112   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:13.076167   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:13.105312   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.105329   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:13.105397   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:13.134422   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.134434   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:13.134509   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:13.163088   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.163100   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:13.163156   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:13.192169   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.192181   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:13.192249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:13.221272   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.221284   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:13.221342   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:13.249896   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.249907   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:13.249914   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:13.249921   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:13.261316   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:13.261328   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:13.316693   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:13.316704   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:13.316710   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:13.333605   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:13.333618   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:15.389543   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05590645s)
	I0906 15:51:15.389649   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:15.389657   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:13.441127   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:15.442305   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:17.940913   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:17.929544   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:18.017317   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:18.049613   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.049625   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:18.049682   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:18.078124   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.078137   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:18.078194   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:18.106846   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.106859   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:18.106916   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:18.136908   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.136920   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:18.136977   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:18.165211   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.165223   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:18.165281   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:18.194317   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.194329   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:18.194387   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:18.225530   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.225543   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:18.225602   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:18.254758   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.254770   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:18.254777   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:18.254783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:18.296280   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:18.296292   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:18.307948   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:18.307960   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:18.361906   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:18.361916   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:18.361922   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:18.376020   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:18.376033   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:20.430813   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054762389s)
	I0906 15:51:19.942784   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:22.441622   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:22.931094   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:23.016599   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:23.047383   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.047395   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:23.047452   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:23.076558   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.076570   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:23.076629   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:23.105158   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.105174   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:23.105249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:23.134903   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.134915   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:23.134970   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:23.163722   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.163737   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:23.163797   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:23.193082   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.193103   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:23.193179   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:23.223206   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.223218   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:23.223279   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:23.253242   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.253254   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:23.253264   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:23.253273   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:23.269441   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:23.269454   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:25.324087   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054614433s)
	I0906 15:51:25.324197   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:25.324204   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:25.362495   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:25.362508   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:25.373850   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:25.373864   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:25.427416   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:24.443600   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:26.943789   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:27.927755   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:28.018461   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:28.049083   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.049096   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:28.049151   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:28.076915   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.076926   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:28.076984   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:28.105609   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.105624   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:28.105682   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:28.135415   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.135427   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:28.135483   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:28.165044   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.165057   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:28.165117   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:28.194961   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.194972   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:28.195027   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:28.224560   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.224572   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:28.224626   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:28.253940   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.253953   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:28.253961   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:28.253970   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:28.293324   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:28.293338   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:28.304502   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:28.304515   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:28.358820   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:28.358831   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:28.358838   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:28.372433   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:28.372444   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:30.425146   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052684469s)
	I0906 15:51:29.442830   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:31.940449   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:32.927175   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:33.017341   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:33.048887   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.048900   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:33.048957   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:33.077441   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.077452   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:33.077514   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:33.106906   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.106919   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:33.106981   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:33.136315   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.136327   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:33.136384   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:33.164846   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.164859   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:33.164920   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:33.210609   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.210620   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:33.210680   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:33.242201   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.242213   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:33.242269   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:33.270214   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.270226   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:33.270233   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:33.270240   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:33.310549   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:33.310565   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:33.322387   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:33.322400   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:33.374793   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:33.374804   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:33.374812   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:33.388065   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:33.388077   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:35.437468   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04937256s)
	I0906 15:51:33.941085   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:36.442094   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:37.937790   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:37.948009   36618 kubeadm.go:631] restartCluster took 4m5.383312357s
	W0906 15:51:37.948093   36618 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0906 15:51:37.948113   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:51:38.373075   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:51:38.382614   36618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:51:38.390078   36618 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:51:38.390124   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:51:38.397462   36618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
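The failed `ls` above is expected rather than an error in its own right: the preceding `kubeadm reset --force` deleted the kubeconfigs under /etc/kubernetes, so minikube skips stale-config cleanup and proceeds straight to a fresh `kubeadm init`. A small sketch of the same check (file list verbatim from the log; the fallback message is illustrative only):

    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
        || echo "no stale kubeconfigs; proceeding with kubeadm init"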
	I0906 15:51:38.397491   36618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:51:38.444468   36618 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:51:38.444514   36618 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:51:38.751851   36618 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:51:38.751951   36618 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:51:38.752044   36618 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:51:39.022935   36618 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:51:39.023421   36618 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:51:39.030200   36618 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:51:39.096240   36618 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:51:39.120068   36618 out.go:204]   - Generating certificates and keys ...
	I0906 15:51:39.120143   36618 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:51:39.120223   36618 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:51:39.120334   36618 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:51:39.120397   36618 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:51:39.120462   36618 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:51:39.120529   36618 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:51:39.120590   36618 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:51:39.120645   36618 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:51:39.120727   36618 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:51:39.120792   36618 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:51:39.120833   36618 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:51:39.120892   36618 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:51:39.515774   36618 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:51:39.628999   36618 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:51:39.816570   36618 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:51:39.960203   36618 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:51:39.960886   36618 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:51:40.003202   36618 out.go:204]   - Booting up control plane ...
	I0906 15:51:40.003301   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:51:40.003379   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:51:40.003447   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:51:40.003511   36618 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:51:40.003627   36618 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:51:38.941689   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:41.443572   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:43.941795   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:46.441286   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:48.941320   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:51.442966   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:53.940073   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:55.940873   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:57.943480   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:00.441774   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:02.940658   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:04.940941   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:06.943633   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:09.443762   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:11.940301   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:13.941452   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:15.941955   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:19.941067   36618 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:52:19.941616   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:19.941780   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
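The `[kubelet-check]` lines repeat because kubeadm polls the kubelet's healthz endpoint on port 10248; "connection refused" means nothing is listening there, i.e. the kubelet process itself is not running. The probe can be rerun by hand with the exact curl the log quotes, followed by the service checks kubeadm recommends later in this output (assuming a systemd host with shell access):

    curl -sSL http://localhost:10248/healthz && echo kubelet-healthy
    sudo systemctl status kubelet
    sudo journalctl -u kubelet -n 50 --no-pager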
	I0906 15:52:18.443378   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:20.941205   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:22.944080   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:24.939499   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:24.939741   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:25.440548   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:27.441072   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:29.940396   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:31.942049   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:34.933630   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:34.933937   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:33.942419   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:36.444518   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:38.941160   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:40.941401   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:42.942085   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:45.442441   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:47.940847   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:49.943953   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:52.441492   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:54.920474   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:54.920618   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:54.940040   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:56.941544   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:58.943275   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:01.440638   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:03.441633   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:05.940226   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:07.941507   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:10.440810   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:12.440867   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:14.441996   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:16.943539   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:19.441181   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:21.443341   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:23.443498   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:25.942678   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:27.943717   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:30.442290   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:32.941144   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:34.893294   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:53:34.893561   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:53:34.893577   36618 kubeadm.go:317] 
	I0906 15:53:34.893622   36618 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:53:34.893683   36618 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:53:34.893694   36618 kubeadm.go:317] 
	I0906 15:53:34.893731   36618 kubeadm.go:317] This error is likely caused by:
	I0906 15:53:34.893787   36618 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:53:34.893917   36618 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:53:34.893925   36618 kubeadm.go:317] 
	I0906 15:53:34.894045   36618 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:53:34.894099   36618 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:53:34.894131   36618 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:53:34.894142   36618 kubeadm.go:317] 
	I0906 15:53:34.894228   36618 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:53:34.894312   36618 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:53:34.894377   36618 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:53:34.894411   36618 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:53:34.894474   36618 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:53:34.894503   36618 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:53:34.897717   36618 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:53:34.897844   36618 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:53:34.897942   36618 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:53:34.898018   36618 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:53:34.898086   36618 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0906 15:53:34.898216   36618 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
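For convenience, the troubleshooting commands that kubeadm's failure text above suggests, collected into one runnable block (docker is assumed as the runtime, as in this job; CONTAINERID is a placeholder to fill in):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    docker ps -a | grep kube | grep -v pause
    # once the failing container is identified:
    docker logs CONTAINERID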
	
	I0906 15:53:34.898243   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:53:35.322770   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:53:35.332350   36618 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:53:35.332397   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:53:35.340038   36618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:53:35.340060   36618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:53:35.385462   36618 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:53:35.385503   36618 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:53:35.695132   36618 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:53:35.695219   36618 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:53:35.695302   36618 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:53:35.979308   36618 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:53:35.979962   36618 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:53:35.986584   36618 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:53:36.049897   36618 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:53:36.071432   36618 out.go:204]   - Generating certificates and keys ...
	I0906 15:53:36.071511   36618 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:53:36.071599   36618 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:53:36.071705   36618 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:53:36.071754   36618 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:53:36.071836   36618 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:53:36.071932   36618 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:53:36.072028   36618 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:53:36.072072   36618 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:53:36.072132   36618 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:53:36.072207   36618 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:53:36.072239   36618 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:53:36.072293   36618 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:53:36.386098   36618 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:53:36.481839   36618 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:53:36.735962   36618 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:53:36.848356   36618 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:53:36.849031   36618 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:53:36.870925   36618 out.go:204]   - Booting up control plane ...
	I0906 15:53:36.871084   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:53:36.871201   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:53:36.871311   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:53:36.871457   36618 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:53:36.871744   36618 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:53:35.440714   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:37.441318   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:39.441654   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:41.442159   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:43.940095   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:45.940829   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:47.941618   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:50.441918   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:52.940878   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:54.943528   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:56.943592   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:59.442374   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:01.443183   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:03.944275   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:06.442342   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:08.942198   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:11.442663   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:16.829056   36618 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:54:16.829917   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:16.830124   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:13.444236   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:15.941133   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:17.942335   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:21.827690   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:21.827848   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:20.442403   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:22.941548   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:24.942579   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:27.441632   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:31.820981   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:31.821186   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:29.444387   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:31.942340   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:34.441535   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:36.442205   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:38.943078   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:41.441772   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:43.940702   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:45.941793   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:47.436849   37212 pod_ready.go:81] duration metric: took 4m0.005822558s waiting for pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace to be "Ready" ...
	E0906 15:54:47.436870   37212 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 15:54:47.436887   37212 pod_ready.go:38] duration metric: took 4m9.498472217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:54:47.436919   37212 kubeadm.go:631] restartCluster took 4m19.144412803s
	W0906 15:54:47.437043   37212 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0906 15:54:47.437069   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 15:54:51.743270   37212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.306176563s)
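Note that the reset command here differs from the v1.16 run above: the v1.25 bootstrapper reaches Docker through cri-dockerd rather than the dockershim socket. Both invocations, side by side as they appear in the log:

    # v1.16.0 run (pid 36618):
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force
    # v1.25.0 run (pid 37212):
    sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force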
	I0906 15:54:51.743330   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:54:51.752980   37212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:54:51.760278   37212 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:54:51.760326   37212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:54:51.767387   37212 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:54:51.767414   37212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:54:51.808770   37212 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
	I0906 15:54:51.808802   37212 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:54:51.904557   37212 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:54:51.904648   37212 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:54:51.904725   37212 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:54:52.025732   37212 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:54:52.050514   37212 out.go:204]   - Generating certificates and keys ...
	I0906 15:54:52.050582   37212 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:54:52.050668   37212 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:54:52.050742   37212 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:54:52.050789   37212 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:54:52.050842   37212 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:54:52.050887   37212 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:54:52.050939   37212 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:54:52.050986   37212 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:54:52.051056   37212 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:54:52.051129   37212 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:54:52.051161   37212 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:54:52.051204   37212 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:54:52.104655   37212 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:54:52.266933   37212 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:54:52.455099   37212 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:54:52.599889   37212 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:54:52.611289   37212 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:54:52.611867   37212 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:54:52.611907   37212 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 15:54:52.691695   37212 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:54:51.807304   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:51.807458   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:52.713079   37212 out.go:204]   - Booting up control plane ...
	I0906 15:54:52.713174   37212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:54:52.713236   37212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:54:52.713297   37212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:54:52.713374   37212 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:54:52.713513   37212 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:54:58.196526   37212 kubeadm.go:317] [apiclient] All control plane components are healthy after 5.503547 seconds
	I0906 15:54:58.196654   37212 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 15:54:58.203434   37212 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 15:54:58.718698   37212 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 15:54:58.718859   37212 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220906154915-22187 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 15:54:59.224635   37212 kubeadm.go:317] [bootstrap-token] Using token: g5os1h.xfjbuvdd1xawa0ky
	I0906 15:54:59.261788   37212 out.go:204]   - Configuring RBAC rules ...
	I0906 15:54:59.262049   37212 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 15:54:59.262337   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 15:54:59.268841   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 15:54:59.270852   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 15:54:59.272955   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 15:54:59.274702   37212 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 15:54:59.281328   37212 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 15:54:59.432647   37212 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0906 15:54:59.632647   37212 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0906 15:54:59.633705   37212 kubeadm.go:317] 
	I0906 15:54:59.633803   37212 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0906 15:54:59.633816   37212 kubeadm.go:317] 
	I0906 15:54:59.633881   37212 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0906 15:54:59.633888   37212 kubeadm.go:317] 
	I0906 15:54:59.633907   37212 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0906 15:54:59.633950   37212 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 15:54:59.633984   37212 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 15:54:59.633989   37212 kubeadm.go:317] 
	I0906 15:54:59.634058   37212 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0906 15:54:59.634067   37212 kubeadm.go:317] 
	I0906 15:54:59.634138   37212 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 15:54:59.634148   37212 kubeadm.go:317] 
	I0906 15:54:59.634185   37212 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0906 15:54:59.634235   37212 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 15:54:59.634291   37212 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 15:54:59.634298   37212 kubeadm.go:317] 
	I0906 15:54:59.634350   37212 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 15:54:59.634399   37212 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0906 15:54:59.634404   37212 kubeadm.go:317] 
	I0906 15:54:59.634457   37212 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token g5os1h.xfjbuvdd1xawa0ky \
	I0906 15:54:59.634532   37212 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
	I0906 15:54:59.634554   37212 kubeadm.go:317] 	--control-plane 
	I0906 15:54:59.634562   37212 kubeadm.go:317] 
	I0906 15:54:59.634628   37212 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0906 15:54:59.634634   37212 kubeadm.go:317] 
	I0906 15:54:59.634703   37212 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token g5os1h.xfjbuvdd1xawa0ky \
	I0906 15:54:59.634778   37212 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
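	Note that the control-plane variant of the join command above only works once certificates have been distributed, and the log earlier shows the [upload-certs] phase was skipped. A sketch of the upload-certs workflow that kubeadm itself points to (the <certificate-key> placeholder stands in for the value kubeadm prints; it is not part of this run), run against the same endpoint and token as above:
	
		sudo kubeadm init phase upload-certs --upload-certs
		kubeadm join control-plane.minikube.internal:8444 --token g5os1h.xfjbuvdd1xawa0ky \
			--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
			--control-plane --certificate-key <certificate-key>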
	I0906 15:54:59.637971   37212 kubeadm.go:317] W0906 22:54:51.815271    7827 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:54:59.638087   37212 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:54:59.638192   37212 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:54:59.638305   37212 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:54:59.638322   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:54:59.638333   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:54:59.638353   37212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:54:59.638418   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:54:59.638453   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4 minikube.k8s.io/name=default-k8s-different-port-20220906154915-22187 minikube.k8s.io/updated_at=2022_09_06T15_54_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:54:59.652946   37212 ops.go:34] apiserver oom_adj: -16
	I0906 15:54:59.765132   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:00.356297   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:00.855510   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:01.356044   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:01.855680   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:02.357560   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:02.855496   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:03.356576   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:03.857064   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:04.356922   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:04.855648   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:05.355509   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:05.856812   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:06.356378   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:06.856487   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:07.357002   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:07.855628   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:08.357475   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:08.855615   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:09.356132   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:09.856796   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:10.355518   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:10.855528   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:11.356121   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:11.855538   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:11.915907   37212 kubeadm.go:1046] duration metric: took 12.277509448s to wait for elevateKubeSystemPrivileges.
	I0906 15:55:11.915924   37212 kubeadm.go:398] StartCluster complete in 4m43.659305517s
	I0906 15:55:11.915940   37212 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:55:11.916016   37212 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:55:11.916547   37212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:55:12.432639   37212 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220906154915-22187" rescaled to 1
	I0906 15:55:12.432672   37212 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:55:12.432680   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:55:12.432706   37212 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 15:55:12.456099   37212 out.go:177] * Verifying Kubernetes components...
	I0906 15:55:12.432831   37212 config.go:180] Loaded profile config "default-k8s-different-port-20220906154915-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:55:12.456163   37212 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.456171   37212 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.456174   37212 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.456176   37212 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.499149   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 15:55:12.529511   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:55:12.529526   37212 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.529528   37212 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.529535   37212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220906154915-22187"
	W0906 15:55:12.529545   37212 addons.go:162] addon dashboard should already be in state true
	W0906 15:55:12.529553   37212 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:55:12.529626   37212 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.529660   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	W0906 15:55:12.529689   37212 addons.go:162] addon metrics-server should already be in state true
	I0906 15:55:12.529658   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	I0906 15:55:12.529766   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	I0906 15:55:12.530198   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.531221   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.531900   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.532011   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.549127   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.680879   37212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:55:12.640947   37212 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.661070   37212 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	W0906 15:55:12.680984   37212 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:55:12.718152   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	I0906 15:55:12.718210   37212 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:55:12.775898   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:55:12.755048   37212 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 15:55:12.776017   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.850072   37212 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 15:55:12.776417   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.813187   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 15:55:12.829884   37212 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220906154915-22187" to be "Ready" ...
	I0906 15:55:12.887424   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 15:55:12.887573   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 15:55:12.887589   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 15:55:12.887599   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.888232   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.901144   37212 node_ready.go:49] node "default-k8s-different-port-20220906154915-22187" has status "Ready":"True"
	I0906 15:55:12.901168   37212 node_ready.go:38] duration metric: took 13.7942ms waiting for node "default-k8s-different-port-20220906154915-22187" to be "Ready" ...
	I0906 15:55:12.901178   37212 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:55:12.916307   37212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-6g7xm" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:12.938564   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:12.974260   37212 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:55:12.974271   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:55:12.974329   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.976572   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:12.979654   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:13.045815   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:13.108896   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:55:13.121508   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 15:55:13.121527   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 15:55:13.131848   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 15:55:13.131866   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 15:55:13.209761   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 15:55:13.209774   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 15:55:13.223186   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:55:13.302916   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:55:13.302940   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 15:55:13.309224   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 15:55:13.309237   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 15:55:13.327162   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:55:13.395379   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 15:55:13.428686   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 15:55:13.522685   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 15:55:13.522699   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 15:55:13.626615   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 15:55:13.626632   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 15:55:13.721707   37212 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.192261292s)
	I0906 15:55:13.721737   37212 start.go:810] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
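	The sed pipeline completed above splices a hosts stanza into the CoreDNS Corefile just ahead of the existing forward plugin, so host.minikube.internal resolves to the host gateway while all other names still fall through to the node's resolver. The resulting Corefile fragment looks like:
	
		hosts {
		   192.168.65.2 host.minikube.internal
		   fallthrough
		}
		forward . /etc/resolv.conf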
	I0906 15:55:13.794268   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 15:55:13.794285   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 15:55:13.920312   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 15:55:13.920326   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 15:55:14.005171   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 15:55:14.005188   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 15:55:14.022831   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:55:14.022846   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 15:55:14.105185   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:55:14.326598   37212 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:14.935413   37212 pod_ready.go:102] pod "coredns-565d847f94-6g7xm" in "kube-system" namespace has status "Ready":"False"
	I0906 15:55:15.141698   37212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0906 15:55:15.178672   37212 addons.go:414] enableAddons completed in 2.745959213s
	I0906 15:55:16.937654   37212 pod_ready.go:102] pod "coredns-565d847f94-6g7xm" in "kube-system" namespace has status "Ready":"False"
	I0906 15:55:17.935795   37212 pod_ready.go:92] pod "coredns-565d847f94-6g7xm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:17.935809   37212 pod_ready.go:81] duration metric: took 5.01946616s waiting for pod "coredns-565d847f94-6g7xm" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:17.935816   37212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-q4mb7" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.446882   37212 pod_ready.go:92] pod "coredns-565d847f94-q4mb7" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.446896   37212 pod_ready.go:81] duration metric: took 511.073117ms waiting for pod "coredns-565d847f94-q4mb7" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.446904   37212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.451838   37212 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.451848   37212 pod_ready.go:81] duration metric: took 4.936622ms waiting for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.451854   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.457179   37212 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.457189   37212 pod_ready.go:81] duration metric: took 5.329087ms waiting for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.457196   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.461768   37212 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.461778   37212 pod_ready.go:81] duration metric: took 4.575554ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.461784   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmfkn" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.733119   37212 pod_ready.go:92] pod "kube-proxy-tmfkn" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.733129   37212 pod_ready.go:81] duration metric: took 271.339141ms waiting for pod "kube-proxy-tmfkn" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.733137   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:19.132361   37212 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:19.132371   37212 pod_ready.go:81] duration metric: took 399.227312ms waiting for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:19.132376   37212 pod_ready.go:38] duration metric: took 6.231173997s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:55:19.132390   37212 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:55:19.132442   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:55:19.143302   37212 api_server.go:71] duration metric: took 6.710591857s to wait for apiserver process to appear ...
	I0906 15:55:19.143315   37212 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:55:19.143323   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:55:19.148529   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 200:
	ok
	I0906 15:55:19.149651   37212 api_server.go:140] control plane version: v1.25.0
	I0906 15:55:19.149659   37212 api_server.go:130] duration metric: took 6.340438ms to wait for apiserver health ...
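	The healthz probe above goes through the Docker-published host port for apiserver port 8444 (59719 on this run). The equivalent manual check, assuming -k is needed because the apiserver's certificate is not in the host trust store, would be along the lines of:
	
		curl -k https://127.0.0.1:59719/healthz
		ok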
	I0906 15:55:19.149665   37212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:55:19.338022   37212 system_pods.go:59] 9 kube-system pods found
	I0906 15:55:19.338037   37212 system_pods.go:61] "coredns-565d847f94-6g7xm" [cd12e82d-279c-477c-82a6-77663bdacc76] Running
	I0906 15:55:19.338041   37212 system_pods.go:61] "coredns-565d847f94-q4mb7" [9e68ed76-3285-4c00-9e6f-54f5de87e7a4] Running
	I0906 15:55:19.338045   37212 system_pods.go:61] "etcd-default-k8s-different-port-20220906154915-22187" [e5c83ff5-8057-4ec5-9c5e-268a762eb62a] Running
	I0906 15:55:19.338049   37212 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220906154915-22187" [ac2adb4b-dbde-47e6-9e92-97a6c9ee96f4] Running
	I0906 15:55:19.338053   37212 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220906154915-22187" [0163f669-ebfc-46ce-aa87-ffce3904c5e1] Running
	I0906 15:55:19.338059   37212 system_pods.go:61] "kube-proxy-tmfkn" [c9364049-c8f3-468a-867e-50133dcc208b] Running
	I0906 15:55:19.338064   37212 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220906154915-22187" [887554cf-68d1-4e4f-bc6f-0d65eb7e3d28] Running
	I0906 15:55:19.338069   37212 system_pods.go:61] "metrics-server-5c8fd5cf8-2pdjw" [b88a6579-9359-435f-8fb4-b7ec5c7f7d52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:55:19.338078   37212 system_pods.go:61] "storage-provisioner" [da22f144-e345-4b66-b770-500d22a98dfc] Running
	I0906 15:55:19.338082   37212 system_pods.go:74] duration metric: took 188.413972ms to wait for pod list to return data ...
	I0906 15:55:19.338089   37212 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:55:19.532218   37212 default_sa.go:45] found service account: "default"
	I0906 15:55:19.532231   37212 default_sa.go:55] duration metric: took 194.136492ms for default service account to be created ...
	I0906 15:55:19.532236   37212 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:55:19.735925   37212 system_pods.go:86] 9 kube-system pods found
	I0906 15:55:19.735939   37212 system_pods.go:89] "coredns-565d847f94-6g7xm" [cd12e82d-279c-477c-82a6-77663bdacc76] Running
	I0906 15:55:19.735944   37212 system_pods.go:89] "coredns-565d847f94-q4mb7" [9e68ed76-3285-4c00-9e6f-54f5de87e7a4] Running
	I0906 15:55:19.735947   37212 system_pods.go:89] "etcd-default-k8s-different-port-20220906154915-22187" [e5c83ff5-8057-4ec5-9c5e-268a762eb62a] Running
	I0906 15:55:19.735957   37212 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220906154915-22187" [ac2adb4b-dbde-47e6-9e92-97a6c9ee96f4] Running
	I0906 15:55:19.735962   37212 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220906154915-22187" [0163f669-ebfc-46ce-aa87-ffce3904c5e1] Running
	I0906 15:55:19.735968   37212 system_pods.go:89] "kube-proxy-tmfkn" [c9364049-c8f3-468a-867e-50133dcc208b] Running
	I0906 15:55:19.735972   37212 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220906154915-22187" [887554cf-68d1-4e4f-bc6f-0d65eb7e3d28] Running
	I0906 15:55:19.735977   37212 system_pods.go:89] "metrics-server-5c8fd5cf8-2pdjw" [b88a6579-9359-435f-8fb4-b7ec5c7f7d52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:55:19.735981   37212 system_pods.go:89] "storage-provisioner" [da22f144-e345-4b66-b770-500d22a98dfc] Running
	I0906 15:55:19.735986   37212 system_pods.go:126] duration metric: took 203.746511ms to wait for k8s-apps to be running ...
	I0906 15:55:19.735991   37212 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:55:19.736042   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:55:19.746224   37212 system_svc.go:56] duration metric: took 10.227063ms WaitForService to wait for kubelet.
	I0906 15:55:19.746239   37212 kubeadm.go:573] duration metric: took 7.313531095s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:55:19.746256   37212 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:55:19.935919   37212 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:55:19.935936   37212 node_conditions.go:123] node cpu capacity is 6
	I0906 15:55:19.935944   37212 node_conditions.go:105] duration metric: took 189.682536ms to run NodePressure ...
	I0906 15:55:19.935956   37212 start.go:216] waiting for startup goroutines ...
	I0906 15:55:19.974175   37212 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:55:20.010226   37212 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220906154915-22187" cluster and "default" namespace by default
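	With the kubeconfig now pointing at this profile, a quick smoke test of the freshly started cluster would look something like the following (assuming minikube's usual convention of naming the kubectl context after the profile):
	
		kubectl --context default-k8s-different-port-20220906154915-22187 get pods -A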
	I0906 15:55:31.779661   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:55:31.779822   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:55:31.779830   36618 kubeadm.go:317] 
	I0906 15:55:31.779860   36618 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:55:31.779889   36618 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:55:31.779894   36618 kubeadm.go:317] 
	I0906 15:55:31.779921   36618 kubeadm.go:317] This error is likely caused by:
	I0906 15:55:31.779960   36618 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:55:31.780052   36618 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:55:31.780063   36618 kubeadm.go:317] 
	I0906 15:55:31.780169   36618 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:55:31.780219   36618 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:55:31.780247   36618 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:55:31.780251   36618 kubeadm.go:317] 
	I0906 15:55:31.780328   36618 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:55:31.780416   36618 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:55:31.780495   36618 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:55:31.780559   36618 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:55:31.780661   36618 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:55:31.780715   36618 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:55:31.783923   36618 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:55:31.784047   36618 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:55:31.784168   36618 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:55:31.784249   36618 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:55:31.784306   36618 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0906 15:55:31.784333   36618 kubeadm.go:398] StartCluster complete in 7m59.255788376s
	I0906 15:55:31.784406   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:55:31.816119   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.816135   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:55:31.816207   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:55:31.852948   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.852961   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:55:31.853021   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:55:31.884845   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.884856   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:55:31.884911   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:55:31.917054   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.917068   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:55:31.917132   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:55:31.948382   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.948395   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:55:31.948451   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:55:31.982328   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.982339   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:55:31.982387   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:55:32.013438   36618 logs.go:274] 0 containers: []
	W0906 15:55:32.013450   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:55:32.013510   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:55:32.044826   36618 logs.go:274] 0 containers: []
	W0906 15:55:32.044840   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:55:32.044847   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:55:32.044854   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:55:32.085941   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:55:32.085955   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:55:32.097748   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:55:32.097762   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:55:32.160044   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:55:32.160054   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:55:32.160060   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:55:32.174249   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:55:32.174260   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:55:34.234529   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060250655s)
	W0906 15:55:34.234640   36618 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 15:55:34.234654   36618 out.go:239] * 
	W0906 15:55:34.234769   36618 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:55:34.234800   36618 out.go:239] * 
	W0906 15:55:34.235311   36618 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 15:55:34.299125   36618 out.go:177] 
	W0906 15:55:34.342220   36618 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:55:34.342329   36618 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 15:55:34.342385   36618 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
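	Spelled out as a full command, the suggested remediation above would be run as (with <profile> standing in for the failing profile's name; the --extra-config flag is taken verbatim from the hint, and this run does not verify that it resolves the kubelet failure):
	
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd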
	I0906 15:55:34.385240   36618 out.go:177] 
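
The suggestion above points at a kubelet/Docker cgroup-driver mismatch. A minimal sketch of applying it with this run's binary, then re-checking the kubelet journal; the profile name is an assumption carried over from the node logs below:

    out/minikube-darwin-amd64 start -p default-k8s-different-port-20220906154915-22187 --extra-config=kubelet.cgroup-driver=systemd
    out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220906154915-22187 -- journalctl -xeu kubelet | tail -n 50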
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:50:24 UTC, end at Tue 2022-09-06 22:56:08 UTC. --
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.477144595Z" level=info msg="ignoring event" container=b6090df368cf1c943838a5eafd9bd19b4160afaaf5ca3f8d55fe9eb5fbcf8acc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.546874524Z" level=info msg="ignoring event" container=22ec863aee3edce39e57d343903764a71b234fd44cc171b35cc38d4ed869e2da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.625388056Z" level=info msg="ignoring event" container=98036b5abb352fe5b6352240ce92fd5bae0afc37395ff4a8e14c55bf7559b20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.733393880Z" level=info msg="ignoring event" container=5e602b66c3c23fbdc29b0027b56f7121de29864356dfab845fef0b7779e93bae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.800591872Z" level=info msg="ignoring event" container=4b235df16e6994fa3ef897cbf0a8e6d69de49878a76f92fe70b36ff2f00e56d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.907200741Z" level=info msg="ignoring event" container=0fc0a1ce243fbbb6e3fa81d97cd0c596b1d25cf1700320e502e789e9a0667785 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.974871558Z" level=info msg="ignoring event" container=ee9a9f6db46303cbe9530cce49f01329f4b49afa485c0db0f5351fe9f86346ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.044209645Z" level=info msg="ignoring event" container=2b8ccbe97df0dd741bc0c8e562761eba97dfe08b52008409521428b2e56a6879 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.107481448Z" level=info msg="ignoring event" container=3e74234684fdbe4487659498dc5474f5d575aeb84113397e81baff21f1ef0358 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.188878823Z" level=info msg="ignoring event" container=d8a639143af94543d1a9cc7b19ec897f39a230396d154559b4675fd9177a7d59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.258227856Z" level=info msg="ignoring event" container=bd635a365f6142d60f9c92baff1a46c39d752a10fe1c612058c308092c5dcccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.377184294Z" level=info msg="ignoring event" container=f2205f191166bce5eb516411fa4cb06f95b1b0967ef4a6276cab71bd69551b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:15 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:15.416732067Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:15 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:15.417303083Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:15 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:15.418563655Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:16 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:16.148234964Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Sep 06 22:55:20 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:20.407146682Z" level=info msg="ignoring event" container=31d7700143a426b8af1544bb1bf9357019b278e702a8a58bc28705eb91a6f642 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:20 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:20.553452925Z" level=info msg="ignoring event" container=45ec75e68f94aaf8ee9d8da70e4326ac5369d9e736d9499cad4d66fc8f8f6826 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:21 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:21.458228531Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 22:55:21 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:21.627109374Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 22:55:24 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:24.918173262Z" level=info msg="ignoring event" container=ad3a30b30ec958e3d18c03553fa87e0baf6dcf5e03dc50c8dc9c78aa77b55f57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:25 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:25.611877865Z" level=info msg="ignoring event" container=07e1a936211443892c9b484fefae3f75027257e3b2f7b9de565e37fe397ae1de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:31 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:31.585401160Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:31 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:31.585443289Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:31 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:31.586966538Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
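
The repeated pull failures above come from the unresolvable registry host fake.domain. To confirm the lookup fails against the same resolver the daemon used (192.168.65.2, taken from the log), a quick sketch:

    nslookup fake.domain 192.168.65.2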
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	07e1a93621144       a90209bb39e3d                                                                                    43 seconds ago       Exited              dashboard-metrics-scraper   1                   851aacc203834
	e3c507c5d7b13       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   48 seconds ago       Running             kubernetes-dashboard        0                   5dcc0b092221f
	bf51ca1279c61       6e38f40d628db                                                                                    54 seconds ago       Running             storage-provisioner         0                   c86003c5a89a8
	930fcd99160b8       5185b96f0becf                                                                                    54 seconds ago       Running             coredns                     0                   e3482c66aa743
	d42a44c5bc034       58a9a0c6d96f2                                                                                    56 seconds ago       Running             kube-proxy                  0                   3fb6e6ae91df8
	2f3e869c727f9       bef2cf3115095                                                                                    About a minute ago   Running             kube-scheduler              0                   851acb1dbeb1e
	e109e6fe94bd5       1a54c86c03a67                                                                                    About a minute ago   Running             kube-controller-manager     0                   94e3e9b2b7bdf
	2087e58b9ebd8       a8a176a5d5d69                                                                                    About a minute ago   Running             etcd                        0                   567d18333cc46
	00d241b0787a8       4d2edfd10d3e3                                                                                    About a minute ago   Running             kube-apiserver              0                   817f79dd0b5b0
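
The table shows dashboard-metrics-scraper in state Exited after one restart attempt. Following the troubleshooting hint from the kubeadm output earlier, its logs can be pulled by container ID (the prefix is taken from the CONTAINER column above):

    docker ps -a | grep kube | grep -v pause
    docker logs 07e1a93621144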
	
	* 
	* ==> coredns [930fcd99160b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
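
The Reloading/lameduck lines indicate CoreDNS picked up a change to its Corefile. To view the active configuration, a sketch (assuming kubectl is pointed at this profile's context):

    kubectl -n kube-system get configmap coredns -o yaml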
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220906154915-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220906154915-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=default-k8s-different-port-20220906154915-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_54_59_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:54:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220906154915-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:56:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:56:06 +0000   Tue, 06 Sep 2022 22:54:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:56:06 +0000   Tue, 06 Sep 2022 22:54:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:56:06 +0000   Tue, 06 Sep 2022 22:54:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 06 Sep 2022 22:56:06 +0000   Tue, 06 Sep 2022 22:56:06 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220906154915-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                9e7bcc06-4367-4f4b-bc76-5523d39b1adc
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-6g7xm                                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-different-port-20220906154915-22187                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         68s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220906154915-22187              250m (4%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220906154915-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-tmfkn                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220906154915-22187              100m (1%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 metrics-server-5c8fd5cf8-2pdjw                                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         54s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        dashboard-metrics-scraper-7b94984548-xqs4c                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-54596f475f-q5gxc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientPID     75s (x4 over 75s)  kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  75s (x4 over 75s)  kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientMemory
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    75s (x4 over 75s)  kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  69s                kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                69s                kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeReady
	  Normal  NodeHasSufficientPID     69s                kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    69s                kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           57s                node-controller  Node default-k8s-different-port-20220906154915-22187 event: Registered Node default-k8s-different-port-20220906154915-22187 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeNotReady             2s                 kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2s                 kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientPID
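
The node is NotReady with "PLEG is not healthy: pleg has yet to be successful", which typically clears within seconds of a kubelet restart (note the "Starting 3s" event above). To watch the Ready condition flip back, a sketch:

    kubectl get node default-k8s-different-port-20220906154915-22187 -w
    kubectl get node default-k8s-different-port-20220906154915-22187 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'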
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [2087e58b9ebd] <==
	* {"level":"info","ts":"2022-09-06T22:54:54.064Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:54:54.064Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:54:54.064Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220906154915-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:54:54.656Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:54:54.656Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-09-06T22:55:14.198Z","caller":"traceutil/trace.go:171","msg":"trace[1678117136] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"152.38418ms","start":"2022-09-06T22:55:14.046Z","end":"2022-09-06T22:55:14.198Z","steps":["trace[1678117136] 'process raft request'  (duration: 152.182575ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-06T22:55:14.198Z","caller":"traceutil/trace.go:171","msg":"trace[1387182222] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"153.319714ms","start":"2022-09-06T22:55:14.045Z","end":"2022-09-06T22:55:14.198Z","steps":["trace[1387182222] 'process raft request'  (duration: 78.965284ms)","trace[1387182222] 'compare'  (duration: 73.84346ms)"],"step_count":2}
	{"level":"warn","ts":"2022-09-06T22:55:20.177Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"210.321556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-09-06T22:55:20.177Z","caller":"traceutil/trace.go:171","msg":"trace[1498113217] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:508; }","duration":"210.417868ms","start":"2022-09-06T22:55:19.967Z","end":"2022-09-06T22:55:20.177Z","steps":["trace[1498113217] 'range keys from in-memory index tree'  (duration: 210.269251ms)"],"step_count":1}
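
etcd elected itself leader at term 2 and is serving clients on 2379; the later "apply request took too long" warning is a slow-I/O symptom rather than a failure. To probe it from inside the node, a sketch (the cert paths are an assumption based on minikube's usual /var/lib/minikube/certs layout, and the container ID is taken from the section header above):

    docker exec 2087e58b9ebd etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key endpoint health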
	
	* 
	* ==> kernel <==
	*  22:56:09 up  1:12,  0 users,  load average: 1.46, 0.98, 1.01
	Linux default-k8s-different-port-20220906154915-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [00d241b0787a] <==
	* I0906 22:54:57.746199       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 22:54:57.746258       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 22:54:58.030080       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:54:58.054973       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:54:58.181586       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0906 22:54:58.185411       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0906 22:54:58.186157       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 22:54:58.188942       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 22:54:58.830587       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:54:59.443182       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:54:59.448747       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0906 22:54:59.455473       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:54:59.536259       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:55:12.216141       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0906 22:55:12.568104       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0906 22:55:14.315009       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.247.14]
	W0906 22:55:15.044009       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:55:15.044045       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0906 22:55:15.044051       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 22:55:15.044074       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:55:15.044103       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 22:55:15.044701       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.54.7]
	I0906 22:55:15.045059       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 22:55:15.101423       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.105.123.82]
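
The 503s for v1beta1.metrics.k8s.io are the aggregation layer failing to reach the metrics-server backend (which, per the kubelet log below, cannot pull its image). To inspect the APIService registration and its availability condition, a sketch:

    kubectl get apiservice v1beta1.metrics.k8s.io -o wide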
	
	* 
	* ==> kube-controller-manager [e109e6fe94bd] <==
	* I0906 22:55:12.782977       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-q4mb7"
	I0906 22:55:12.800074       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-6g7xm"
	I0906 22:55:12.920760       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-q4mb7"
	I0906 22:55:14.042818       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I0906 22:55:14.045382       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c8fd5cf8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0906 22:55:14.199803       1 replica_set.go:550] sync "kube-system/metrics-server-5c8fd5cf8" failed with pods "metrics-server-5c8fd5cf8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0906 22:55:14.205170       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-2pdjw"
	I0906 22:55:14.982465       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I0906 22:55:14.988147       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-54596f475f to 1"
	I0906 22:55:14.989815       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:55:14.992572       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:55:14.997128       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:55:14.997758       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:55:15.001180       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:55:15.001236       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:55:15.003651       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:55:15.003655       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:55:15.006151       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:55:15.006191       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:55:15.006173       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:55:15.006202       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:55:15.017438       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-54596f475f-q5gxc"
	I0906 22:55:15.031485       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-xqs4c"
	E0906 22:56:05.696861       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0906 22:56:05.759356       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
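
The FailedCreate bursts above are a benign startup race: each ReplicaSet was reconciled before its ServiceAccount existed, and each one succeeds moments later (SuccessfulCreate). To confirm the accounts are now present, a sketch:

    kubectl -n kubernetes-dashboard get serviceaccounts
    kubectl -n kube-system get serviceaccount metrics-server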
	
	* 
	* ==> kube-proxy [d42a44c5bc03] <==
	* I0906 22:55:13.324022       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:55:13.324104       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:55:13.324123       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:55:13.423105       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:55:13.423154       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:55:13.423161       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:55:13.423191       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:55:13.423227       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:55:13.423301       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:55:13.423440       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:55:13.423466       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:55:13.424767       1 config.go:317] "Starting service config controller"
	I0906 22:55:13.424784       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:55:13.424815       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:55:13.424821       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:55:13.424997       1 config.go:444] "Starting node config controller"
	I0906 22:55:13.425006       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:55:13.525532       1 shared_informer.go:262] Caches are synced for node config
	I0906 22:55:13.525586       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:55:13.525615       1 shared_informer.go:262] Caches are synced for service config
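
kube-proxy settled on the iptables proxier and synced its caches. To list the service NAT chains it programmed on the node, a sketch reusing this run's binary and profile:

    out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220906154915-22187 -- sudo iptables-save -t nat | grep KUBE-SVC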
	
	* 
	* ==> kube-scheduler [2f3e869c727f] <==
	* W0906 22:54:56.832394       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 22:54:56.832454       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 22:54:56.832465       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 22:54:56.832478       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 22:54:56.832395       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 22:54:56.832693       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 22:54:56.832704       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 22:54:56.832715       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 22:54:56.832789       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 22:54:56.832849       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 22:54:56.832917       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 22:54:56.832975       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 22:54:57.649360       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 22:54:57.649528       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 22:54:57.680727       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 22:54:57.680902       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 22:54:57.730731       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 22:54:57.730772       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0906 22:54:57.829528       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 22:54:57.829565       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:54:57.899679       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 22:54:57.899751       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 22:54:57.947161       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 22:54:57.947249       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0906 22:54:59.728656       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
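
The "forbidden" list/watch failures above fall in the short window before the apiserver publishes its RBAC bootstrap policy; they stop once the scheduler's caches sync (last line). To verify the scheduler's permissions after bootstrap, a sketch:

    kubectl auth can-i list nodes --as=system:kube-scheduler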
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:50:24 UTC, end at Tue 2022-09-06 22:56:10 UTC. --
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.161826   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb7p9\" (UniqueName: \"kubernetes.io/projected/f06bbe71-2d02-408c-b415-734366bf4723-kube-api-access-cb7p9\") pod \"dashboard-metrics-scraper-7b94984548-xqs4c\" (UID: \"f06bbe71-2d02-408c-b415-734366bf4723\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-xqs4c"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.161889   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd12e82d-279c-477c-82a6-77663bdacc76-config-volume\") pod \"coredns-565d847f94-6g7xm\" (UID: \"cd12e82d-279c-477c-82a6-77663bdacc76\") " pod="kube-system/coredns-565d847f94-6g7xm"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.161983   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9364049-c8f3-468a-867e-50133dcc208b-xtables-lock\") pod \"kube-proxy-tmfkn\" (UID: \"c9364049-c8f3-468a-867e-50133dcc208b\") " pod="kube-system/kube-proxy-tmfkn"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162067   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6rz\" (UniqueName: \"kubernetes.io/projected/c9364049-c8f3-468a-867e-50133dcc208b-kube-api-access-5s6rz\") pod \"kube-proxy-tmfkn\" (UID: \"c9364049-c8f3-468a-867e-50133dcc208b\") " pod="kube-system/kube-proxy-tmfkn"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162102   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzb5g\" (UniqueName: \"kubernetes.io/projected/b88a6579-9359-435f-8fb4-b7ec5c7f7d52-kube-api-access-hzb5g\") pod \"metrics-server-5c8fd5cf8-2pdjw\" (UID: \"b88a6579-9359-435f-8fb4-b7ec5c7f7d52\") " pod="kube-system/metrics-server-5c8fd5cf8-2pdjw"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162137   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pgf8\" (UniqueName: \"kubernetes.io/projected/cd12e82d-279c-477c-82a6-77663bdacc76-kube-api-access-5pgf8\") pod \"coredns-565d847f94-6g7xm\" (UID: \"cd12e82d-279c-477c-82a6-77663bdacc76\") " pod="kube-system/coredns-565d847f94-6g7xm"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162155   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b88a6579-9359-435f-8fb4-b7ec5c7f7d52-tmp-dir\") pod \"metrics-server-5c8fd5cf8-2pdjw\" (UID: \"b88a6579-9359-435f-8fb4-b7ec5c7f7d52\") " pod="kube-system/metrics-server-5c8fd5cf8-2pdjw"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162209   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9364049-c8f3-468a-867e-50133dcc208b-kube-proxy\") pod \"kube-proxy-tmfkn\" (UID: \"c9364049-c8f3-468a-867e-50133dcc208b\") " pod="kube-system/kube-proxy-tmfkn"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162291   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8bs\" (UniqueName: \"kubernetes.io/projected/da22f144-e345-4b66-b770-500d22a98dfc-kube-api-access-ht8bs\") pod \"storage-provisioner\" (UID: \"da22f144-e345-4b66-b770-500d22a98dfc\") " pod="kube-system/storage-provisioner"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162332   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f06bbe71-2d02-408c-b415-734366bf4723-tmp-volume\") pod \"dashboard-metrics-scraper-7b94984548-xqs4c\" (UID: \"f06bbe71-2d02-408c-b415-734366bf4723\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-xqs4c"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162348   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9364049-c8f3-468a-867e-50133dcc208b-lib-modules\") pod \"kube-proxy-tmfkn\" (UID: \"c9364049-c8f3-468a-867e-50133dcc208b\") " pod="kube-system/kube-proxy-tmfkn"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162364   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5c52cb40-b9c3-4910-87ba-7c97614ca12e-tmp-volume\") pod \"kubernetes-dashboard-54596f475f-q5gxc\" (UID: \"5c52cb40-b9c3-4910-87ba-7c97614ca12e\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-q5gxc"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162456   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8grc\" (UniqueName: \"kubernetes.io/projected/5c52cb40-b9c3-4910-87ba-7c97614ca12e-kube-api-access-d8grc\") pod \"kubernetes-dashboard-54596f475f-q5gxc\" (UID: \"5c52cb40-b9c3-4910-87ba-7c97614ca12e\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-q5gxc"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162517   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/da22f144-e345-4b66-b770-500d22a98dfc-tmp\") pod \"storage-provisioner\" (UID: \"da22f144-e345-4b66-b770-500d22a98dfc\") " pod="kube-system/storage-provisioner"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162535   10978 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:08.342232   10978 request.go:601] Waited for 1.122758878s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:08.434183   10978 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220906154915-22187\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220906154915-22187"
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:08.576563   10978 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220906154915-22187\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220906154915-22187"
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:08.799715   10978 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220906154915-22187\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220906154915-22187"
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:08.966375   10978 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220906154915-22187\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220906154915-22187"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:09.246710   10978 scope.go:115] "RemoveContainer" containerID="07e1a936211443892c9b484fefae3f75027257e3b2f7b9de565e37fe397ae1de"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:09.869470   10978 remote_image.go:222] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:09.869529   10978 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:09.869634   10978 kuberuntime_manager.go:862] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hzb5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c8fd5cf8-2pdjw_kube-system(b88a6579-9359-435f-8fb4-b7ec5c7f7d52): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:09.869661   10978 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c8fd5cf8-2pdjw" podUID=b88a6579-9359-435f-8fb4-b7ec5c7f7d52
	
	* 
	* ==> kubernetes-dashboard [e3c507c5d7b1] <==
	* 2022/09/06 22:55:21 Using namespace: kubernetes-dashboard
	2022/09/06 22:55:21 Using in-cluster config to connect to apiserver
	2022/09/06 22:55:21 Using secret token for csrf signing
	2022/09/06 22:55:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/09/06 22:55:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/09/06 22:55:21 Successful initial request to the apiserver, version: v1.25.0
	2022/09/06 22:55:21 Generating JWE encryption key
	2022/09/06 22:55:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/09/06 22:55:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/09/06 22:55:21 Initializing JWE encryption key from synchronized object
	2022/09/06 22:55:21 Creating in-cluster Sidecar client
	2022/09/06 22:55:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 22:55:21 Serving insecurely on HTTP port: 9090
	2022/09/06 22:56:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 22:55:21 Starting overwatch
	
	* 
	* ==> storage-provisioner [bf51ca1279c6] <==
	* I0906 22:55:14.818131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:55:14.826526       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:55:14.826708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:55:14.834051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:55:14.834099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ea33574-9df4-4cec-a23e-b315ede47166", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220906154915-22187_81c17666-4088-4160-a057-9d479a5092cf became leader
	I0906 22:55:14.834205       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220906154915-22187_81c17666-4088-4160-a057-9d479a5092cf!
	I0906 22:55:14.935344       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220906154915-22187_81c17666-4088-4160-a057-9d479a5092cf!
	

-- /stdout --
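The field-selector query the harness runs in the post-mortem below is also handy for reproducing this check by hand; a minimal sketch against the same context (profile name taken from this run):

	kubectl --context default-k8s-different-port-20220906154915-22187 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
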
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220906154915-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c8fd5cf8-2pdjw
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220906154915-22187 describe pod metrics-server-5c8fd5cf8-2pdjw
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220906154915-22187 describe pod metrics-server-5c8fd5cf8-2pdjw: exit status 1 (56.023177ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-2pdjw" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220906154915-22187 describe pod metrics-server-5c8fd5cf8-2pdjw: exit status 1
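The ErrImagePull loop for metrics-server in the kubelet log above is expected in this suite: the addon is enabled with its registry rewritten to a non-resolvable host, so the pull can never succeed. The enabling command, as recorded in the Audit table further down:

	out/minikube-darwin-amd64 addons enable metrics-server \
	  -p default-k8s-different-port-20220906154915-22187 \
	  --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
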
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220906154915-22187
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220906154915-22187:

-- stdout --
	[
	    {
	        "Id": "3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68",
	        "Created": "2022-09-06T22:49:21.916989006Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270152,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:50:24.656526964Z",
	            "FinishedAt": "2022-09-06T22:50:22.661548267Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68/hosts",
	        "LogPath": "/var/lib/docker/containers/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68/3d86188a560b5faff72b07178911473490d31349b1eddfe068836b9b4e5d1e68-json.log",
	        "Name": "/default-k8s-different-port-20220906154915-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220906154915-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220906154915-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/81391995e8f7fc45b9c9a01ab9037f53766b991043ca9b7dfd6a1abcac58ce48-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81391995e8f7fc45b9c9a01ab9037f53766b991043ca9b7dfd6a1abcac58ce48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81391995e8f7fc45b9c9a01ab9037f53766b991043ca9b7dfd6a1abcac58ce48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81391995e8f7fc45b9c9a01ab9037f53766b991043ca9b7dfd6a1abcac58ce48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220906154915-22187",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220906154915-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220906154915-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220906154915-22187",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220906154915-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "205be613d03681993fd693cb3f7845a436f1438eec75dbf09596e296a882a445",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59715"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59716"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59717"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59718"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59719"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/205be613d036",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220906154915-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3d86188a560b",
	                        "default-k8s-different-port-20220906154915-22187"
	                    ],
	                    "NetworkID": "b1d553093d415a7df6a4f6e69bc112c62a388e7ca5dec486e6bc316fc8b58dbb",
	                    "EndpointID": "405256d7cd2f9d9b492033b06aae637bd8522b00cfa2994816b8bd88c9e407f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
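The status checks that follow read single fields out of this inspect document with Go templates rather than parsing the full JSON; the same fields shown above can be queried directly (a sketch using the container name from this run; the port-template pattern follows the harness's own 22/tcp query later in the log):

	docker container inspect -f '{{.State.Status}}' \
	  default-k8s-different-port-20220906154915-22187
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' \
	  default-k8s-different-port-20220906154915-22187
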
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220906154915-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220906154915-22187 logs -n 25: (2.689961453s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-20220906152523-22187                    | cilium-20220906152523-22187                     | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	| start   | -p                                                | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220906152522-22187                    | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:41 PDT |
	|         | kubenet-20220906152522-22187                      |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:41 PDT | 06 Sep 22 15:42 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:43 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:43 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:45 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT |                     |
	|         | old-k8s-version-20220906154143-22187              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:50:23
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:50:23.383928   37212 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:50:23.384105   37212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:50:23.384110   37212 out.go:309] Setting ErrFile to fd 2...
	I0906 15:50:23.384114   37212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:50:23.384226   37212 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:50:23.384693   37212 out.go:303] Setting JSON to false
	I0906 15:50:23.400568   37212 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10194,"bootTime":1662494429,"procs":338,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:50:23.400663   37212 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:50:23.422701   37212 out.go:177] * [default-k8s-different-port-20220906154915-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:50:23.444975   37212 notify.go:193] Checking for updates...
	I0906 15:50:23.466707   37212 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:50:23.488647   37212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:50:23.509671   37212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:50:23.530748   37212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:50:23.552752   37212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:50:23.575417   37212 config.go:180] Loaded profile config "default-k8s-different-port-20220906154915-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:50:23.576052   37212 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:50:23.643500   37212 docker.go:137] docker version: linux-20.10.17
	I0906 15:50:23.643647   37212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:50:23.772962   37212 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:50:23.713734774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:50:23.816484   37212 out.go:177] * Using the docker driver based on existing profile
	I0906 15:50:23.837697   37212 start.go:284] selected driver: docker
	I0906 15:50:23.837744   37212 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220906154915-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port
-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:50:23.837918   37212 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:50:23.841270   37212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:50:23.972563   37212 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:50:23.911532634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:50:23.972720   37212 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:50:23.972740   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:50:23.972752   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:50:23.972759   37212 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220906154915-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:50:24.016175   37212 out.go:177] * Starting control plane node default-k8s-different-port-20220906154915-22187 in cluster default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.037370   37212 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:50:24.058389   37212 out.go:177] * Pulling base image ...
	I0906 15:50:24.100618   37212 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:50:24.100693   37212 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:50:24.100700   37212 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:50:24.100731   37212 cache.go:57] Caching tarball of preloaded images
	I0906 15:50:24.100971   37212 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:50:24.100991   37212 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:50:24.102052   37212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/config.json ...
	I0906 15:50:24.177644   37212 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:50:24.177679   37212 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:50:24.177695   37212 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:50:24.177751   37212 start.go:364] acquiring machines lock for default-k8s-different-port-20220906154915-22187: {Name:mke86da387e8e60d201d2bf660ca2b291cded1e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:50:24.177833   37212 start.go:368] acquired machines lock for "default-k8s-different-port-20220906154915-22187" in 64.558µs
	I0906 15:50:24.177857   37212 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:50:24.177868   37212 fix.go:55] fixHost starting: 
	I0906 15:50:24.178075   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:50:24.241080   37212 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220906154915-22187: state=Stopped err=<nil>
	W0906 15:50:24.241106   37212 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:50:24.289728   37212 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220906154915-22187" ...
	I0906 15:50:24.310938   37212 cli_runner.go:164] Run: docker start default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.652464   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:50:24.717004   37212 kic.go:415] container "default-k8s-different-port-20220906154915-22187" state is running.
	I0906 15:50:24.717609   37212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.788739   37212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/config.json ...
	I0906 15:50:24.789155   37212 machine.go:88] provisioning docker machine ...
	I0906 15:50:24.789182   37212 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220906154915-22187"
	I0906 15:50:24.789253   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:24.857628   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:24.857848   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:24.857870   37212 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220906154915-22187 && echo "default-k8s-different-port-20220906154915-22187" | sudo tee /etc/hostname
	I0906 15:50:24.982000   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220906154915-22187
	
	I0906 15:50:24.982089   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.047360   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:25.047575   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:25.047593   37212 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220906154915-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220906154915-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220906154915-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:50:25.159181   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
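	Both SSH commands above are idempotent: the hostname is set unconditionally, while /etc/hosts is rewritten only when the 127.0.1.1 entry is missing or stale. A quick check from the host (sketch):
	  docker exec default-k8s-different-port-20220906154915-22187 sh -c 'hostname; grep 127.0.1.1 /etc/hosts'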
	I0906 15:50:25.159203   37212 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:50:25.159232   37212 ubuntu.go:177] setting up certificates
	I0906 15:50:25.159243   37212 provision.go:83] configureAuth start
	I0906 15:50:25.159305   37212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.227062   37212 provision.go:138] copyHostCerts
	I0906 15:50:25.227183   37212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:50:25.227193   37212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:50:25.227287   37212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:50:25.227513   37212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:50:25.227523   37212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:50:25.227599   37212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:50:25.227736   37212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:50:25.227742   37212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:50:25.227797   37212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:50:25.227954   37212 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220906154915-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220906154915-22187]
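	The server certificate itself is generated in Go, so there is no shell command to quote, but the SANs it was issued with can be inspected with openssl (sketch; path shown for the default ~/.minikube layout, this run used the Jenkins workspace path logged above):
	  openssl x509 -in ~/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	  # expect the SANs requested above: IP:192.168.76.2, IP:127.0.0.1, DNS:localhost, DNS:minikube, DNS:default-k8s-different-port-20220906154915-22187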
	I0906 15:50:25.387707   37212 provision.go:172] copyRemoteCerts
	I0906 15:50:25.387773   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:50:25.387820   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.453896   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:25.538722   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:50:25.559997   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0906 15:50:25.578754   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:50:25.599062   37212 provision.go:86] duration metric: configureAuth took 439.804217ms
	I0906 15:50:25.599076   37212 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:50:25.599255   37212 config.go:180] Loaded profile config "default-k8s-different-port-20220906154915-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:50:25.599313   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.664450   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:25.664592   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:25.664602   37212 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:50:25.777980   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:50:25.777993   37212 ubuntu.go:71] root file system type: overlay
	I0906 15:50:25.778137   37212 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:50:25.778210   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:25.842319   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:25.842469   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:25.842532   37212 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:50:25.964564   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:50:25.964654   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.028806   37212 main.go:134] libmachine: Using SSH client type: native
	I0906 15:50:26.028945   37212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59715 <nil> <nil>}
	I0906 15:50:26.028959   37212 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:50:26.145650   37212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:50:26.145668   37212 machine.go:91] provisioned docker machine in 1.356498564s
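	The last SSH command is the idempotent unit-update pattern: diff exits non-zero only when the rendered unit differs from the installed one, and only then is the file swapped in and Docker restarted. The same one-liner, written out:
	  if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	  fi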
	I0906 15:50:26.145678   37212 start.go:300] post-start starting for "default-k8s-different-port-20220906154915-22187" (driver="docker")
	I0906 15:50:26.145685   37212 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:50:26.145738   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:50:26.145781   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.214583   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.297685   37212 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:50:26.301530   37212 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:50:26.301546   37212 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:50:26.301553   37212 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:50:26.301557   37212 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:50:26.301567   37212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:50:26.301695   37212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:50:26.301841   37212 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:50:26.301982   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:50:26.309414   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:50:26.326489   37212 start.go:303] post-start completed in 180.79968ms
	I0906 15:50:26.326571   37212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:50:26.326625   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.391005   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.472459   37212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:50:26.476963   37212 fix.go:57] fixHost completed within 2.299088562s
	I0906 15:50:26.476980   37212 start.go:83] releasing machines lock for "default-k8s-different-port-20220906154915-22187", held for 2.299131722s
	I0906 15:50:26.477075   37212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.543830   37212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:50:26.543849   37212 ssh_runner.go:195] Run: systemctl --version
	I0906 15:50:26.543919   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.543933   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:26.610348   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.610521   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:50:26.738898   37212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:50:26.748821   37212 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:50:26.748877   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:50:26.760220   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:50:26.772960   37212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:50:26.840012   37212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:50:26.910847   37212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:50:26.983057   37212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:50:27.222145   37212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:50:27.292399   37212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:50:27.361398   37212 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:50:27.370829   37212 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:50:27.370897   37212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:50:27.374774   37212 start.go:471] Will wait 60s for crictl version
	I0906 15:50:27.374820   37212 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:50:27.478851   37212 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:50:27.478919   37212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:50:27.513172   37212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:50:22.741213   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.747765   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:22.747822   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:22.780713   36618 logs.go:274] 0 containers: []
	W0906 15:50:22.780735   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:22.780743   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:22.780750   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:22.825622   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:22.825640   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:22.849234   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:22.849253   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:22.904336   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:22.904346   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:22.904353   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:22.917771   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:22.917784   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:24.972764   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054937206s)
	I0906 15:50:27.473071   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:27.516271   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:27.546171   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.546183   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:27.546241   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:27.576500   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.576511   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:27.576565   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:27.605881   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.605898   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:27.605968   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:27.634722   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.634737   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:27.634806   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:27.682458   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.682471   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:27.682562   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:27.715777   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.715790   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:27.715848   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:27.573367   37212 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:50:27.573443   37212 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220906154915-22187 dig +short host.docker.internal
	I0906 15:50:27.702910   37212 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:50:27.703141   37212 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:50:27.707491   37212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
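	The host gateway is discovered by digging host.docker.internal inside the container, then pinned as host.minikube.internal with the same grep-v/append/cp hosts rewrite used for the hostname earlier. To confirm from the host (sketch):
	  docker exec default-k8s-different-port-20220906154915-22187 getent hosts host.minikube.internal   # → 192.168.65.2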
	I0906 15:50:27.718288   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:27.784455   37212 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:50:27.784543   37212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:50:27.816064   37212 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:50:27.816080   37212 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:50:27.816149   37212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:50:27.847540   37212 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:50:27.847561   37212 cache_images.go:84] Images are preloaded, skipping loading
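	Extraction is skipped because the image list already matches what the preload ships; the same listing can be taken by hand through the node's inner Docker daemon (sketch):
	  docker exec default-k8s-different-port-20220906154915-22187 docker images --format '{{.Repository}}:{{.Tag}}'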
	I0906 15:50:27.847634   37212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:50:27.921264   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:50:27.921277   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:50:27.921293   37212 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:50:27.921305   37212 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220906154915-22187 NodeName:default-k8s-different-port-20220906154915-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:50:27.921421   37212 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220906154915-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
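	This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new below; it can be exercised without touching the node via kubeadm's dry-run mode (a sketch using the binaries path from this run; not something the test itself does):
	  sudo /var/lib/minikube/binaries/v1.25.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run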
	I0906 15:50:27.921503   37212 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220906154915-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0906 15:50:27.921560   37212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:50:27.928695   37212 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:50:27.928754   37212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:50:27.935705   37212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0906 15:50:27.947621   37212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:50:27.959675   37212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0906 15:50:27.971770   37212 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:50:27.975353   37212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:50:27.984747   37212 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187 for IP: 192.168.76.2
	I0906 15:50:27.984863   37212 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:50:27.984928   37212 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:50:27.985007   37212 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.key
	I0906 15:50:27.985064   37212 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/apiserver.key.31bdca25
	I0906 15:50:27.985114   37212 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/proxy-client.key
	I0906 15:50:27.985323   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:50:27.985358   37212 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:50:27.985366   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:50:27.985406   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:50:27.985436   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:50:27.985463   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:50:27.985530   37212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:50:27.986135   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:50:28.002943   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 15:50:28.019502   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:50:28.036140   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 15:50:28.052467   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:50:28.068669   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:50:28.085037   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:50:28.101413   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:50:28.117752   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:50:28.134563   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:50:28.151206   37212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:50:28.167822   37212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:50:28.179908   37212 ssh_runner.go:195] Run: openssl version
	I0906 15:50:28.185084   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:50:28.192667   37212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:50:28.196560   37212 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:50:28.196608   37212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:50:28.201652   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:50:28.208974   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:50:28.216562   37212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:50:28.220441   37212 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:50:28.220490   37212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:50:28.225402   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:50:28.232504   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:50:28.240088   37212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:50:28.243702   37212 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:50:28.243751   37212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:50:28.248732   37212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
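	The b5213941.0, 51391683.0 and 3ec20f2e.0 names above follow the OpenSSL subject-hash convention: each CA is symlinked as <hash>.0 so the TLS stack can locate it by subject. Equivalent by hand:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # → b5213941
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"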
	I0906 15:50:28.255841   37212 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220906154915-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:default-k8s-different-port-20220906154915-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:50:28.255949   37212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:50:28.284221   37212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:50:28.291767   37212 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:50:28.291783   37212 kubeadm.go:627] restartCluster start
	I0906 15:50:28.291828   37212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:50:28.298403   37212 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:28.298458   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:50:28.362342   37212 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220906154915-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:50:28.362504   37212 kubeconfig.go:127] "default-k8s-different-port-20220906154915-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:50:28.362854   37212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:50:28.364281   37212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:50:28.371727   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.371785   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.380211   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:27.747228   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.747241   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:27.747297   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:27.779174   36618 logs.go:274] 0 containers: []
	W0906 15:50:27.779190   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:27.779197   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:27.779206   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:27.794916   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:27.794934   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:29.852358   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057404132s)
	I0906 15:50:29.852500   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:29.852510   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:29.890521   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:29.890535   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:29.901840   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:29.901851   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:29.954554   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:32.455578   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:32.518172   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:32.548482   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.548495   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:32.548562   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:32.581388   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.581401   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:32.581462   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:32.613423   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.613440   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:32.613516   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:32.646792   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.646806   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:32.646886   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:32.679058   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.679070   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:32.679132   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:32.706281   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.706294   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:32.706349   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:28.580493   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.580582   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.588946   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:28.782354   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.782515   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.792901   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:28.980326   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:28.980414   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:28.990348   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.180465   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.180555   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.190991   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.380727   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.380854   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.391256   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.582341   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.582484   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.592874   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.782426   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.782560   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.792358   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:29.980694   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:29.980808   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:29.991278   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.180949   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.181077   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.190362   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.380565   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.380676   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.390714   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.581547   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.581695   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.591408   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.781668   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.781744   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.792474   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:30.982446   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:30.982554   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:30.992872   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.182373   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:31.182496   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:31.193523   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.382353   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:31.382500   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:31.392561   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.392570   37212 api_server.go:165] Checking apiserver status ...
	I0906 15:50:31.392611   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:50:31.400629   37212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.400643   37212 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
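The ~200ms-spaced probes above are minikube polling for a kube-apiserver pid until a deadline, then falling back to reconfiguring the cluster. A minimal Go sketch of that poll-until-timeout pattern (helper names here are illustrative, not minikube's actual functions):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // apiServerRunning mirrors the probe in the log: a non-zero exit
    // from pgrep means no matching kube-apiserver process was found.
    func apiServerRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if apiServerRunning() {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for the condition")
    }

    func main() {
        // ~200ms spacing matches the timestamps in the log above.
        if err := waitForAPIServer(200*time.Millisecond, 2*time.Second); err != nil {
            fmt.Println("needs reconfigure: apiserver error:", err)
        }
    }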
	I0906 15:50:31.400653   37212 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:50:31.400714   37212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:50:31.431389   37212 docker.go:443] Stopping containers: [445628b97660 10e168fb5a74 2d246fe6f58a cfa28b4cdb2d e034eba74ac8 bf2a1afd23f7 be82f8452929 127aa7aa3d93 5dd7d8a472ca 9a5362ed7e65 c5cab96a6b6c eb0c740ea4ae b7c21e681624 dc41a5b71413 cd8d53e3fe24 005830c8f8c2]
	I0906 15:50:31.431462   37212 ssh_runner.go:195] Run: docker stop 445628b97660 10e168fb5a74 2d246fe6f58a cfa28b4cdb2d e034eba74ac8 bf2a1afd23f7 be82f8452929 127aa7aa3d93 5dd7d8a472ca 9a5362ed7e65 c5cab96a6b6c eb0c740ea4ae b7c21e681624 dc41a5b71413 cd8d53e3fe24 005830c8f8c2
	I0906 15:50:31.460862   37212 ssh_runner.go:195] Run: sudo systemctl stop kubelet
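Before re-running kubeadm, minikube lists every kube-system pod container by docker name filter, stops them in one call, and then stops the kubelet. The container teardown reduces to roughly this (a sketch using os/exec and strings, not minikube's actual code):

    // collect the IDs of all kube-system pod containers, then stop them
    out, _ := exec.Command("docker", "ps", "-a",
        "--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    if ids := strings.Fields(string(out)); len(ids) > 0 {
        _ = exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }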
	I0906 15:50:31.471093   37212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:50:31.478456   37212 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep  6 22:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep  6 22:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep  6 22:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:49 /etc/kubernetes/scheduler.conf
	
	I0906 15:50:31.478500   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 15:50:31.485784   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 15:50:31.493288   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 15:50:31.500416   37212 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.500477   37212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:50:31.507449   37212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 15:50:31.515558   37212 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:50:31.515611   37212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
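The grep/rm pairs above validate that each static kubeconfig still points at the expected control-plane endpoint (port 8444 here, since this profile uses a non-default apiserver port); files missing the endpoint are deleted so the kubeadm kubeconfig phase below can regenerate them. Roughly (sketch, os/exec):

    endpoint := "https://control-plane.minikube.internal:8444"
    for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
        path := "/etc/kubernetes/" + name
        // grep exits non-zero when the endpoint is absent: treat the file as stale
        if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
            _ = exec.Command("sudo", "rm", "-f", path).Run()
        }
    }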
	I0906 15:50:31.523180   37212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:50:31.530863   37212 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:50:31.530878   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:31.576875   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.388889   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.520033   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:32.572876   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
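Instead of a full "kubeadm init", the restart re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, which preserves existing cluster state. A sketch of the same sequence (os/exec and log packages; the PATH override visible in the log lines is omitted):

    phases := [][]string{
        {"certs", "all"},
        {"kubeconfig", "all"},
        {"kubelet-start"},
        {"control-plane", "all"},
        {"etcd", "local"},
    }
    for _, p := range phases {
        args := append([]string{"init", "phase"}, p...)
        args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
        if out, err := exec.Command("sudo", append([]string{"kubeadm"}, args...)...).CombinedOutput(); err != nil {
            log.Fatalf("kubeadm phase %v failed: %v\n%s", p, err, out)
        }
    }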
	I0906 15:50:32.645195   37212 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:50:32.645266   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:33.159857   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:32.740556   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.745575   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:32.745632   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:32.775009   36618 logs.go:274] 0 containers: []
	W0906 15:50:32.775021   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:32.775028   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:32.775035   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:32.815094   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:32.815109   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:32.827508   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:32.827521   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:32.892093   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:32.892116   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:32.892127   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:32.905761   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:32.905772   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:34.959908   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054119058s)
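The interleaved pid-36618 lines belong to the parallel v1.16.0 profile on this run, which is stuck in a diagnostic loop: every few seconds it re-probes for an apiserver pid and, finding none, sweeps docker for each expected kube-system container before collecting kubelet, dmesg, describe-nodes, Docker, and container-status logs. The container sweep reduces to (sketch, using os/exec, strings, fmt):

    components := []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kubernetes-dashboard", "storage-provisioner",
        "kube-controller-manager",
    }
    for _, c := range components {
        out, _ := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
        // an empty result produces the "No container was found matching" warning
        if ids := strings.Fields(string(out)); len(ids) == 0 {
            fmt.Printf("No container was found matching %q\n", c)
        }
    }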
	I0906 15:50:37.461003   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:37.516871   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:37.552075   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.552087   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:37.552148   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:37.588429   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.588444   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:37.588519   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:37.621349   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.621361   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:37.621443   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:37.653420   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.653435   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:37.653497   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:37.684456   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.684471   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:37.684530   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:37.723554   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.723570   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:37.723702   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:33.657999   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:33.704057   37212 api_server.go:71] duration metric: took 1.058865284s to wait for apiserver process to appear ...
	I0906 15:50:33.704096   37212 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:50:33.704112   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:33.705208   37212 api_server.go:256] stopped: https://127.0.0.1:59719/healthz: Get "https://127.0.0.1:59719/healthz": EOF
	I0906 15:50:34.205313   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:36.341332   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:50:36.341358   37212 api_server.go:102] status: https://127.0.0.1:59719/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:50:36.705764   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:36.711926   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:50:36.711938   37212 api_server.go:102] status: https://127.0.0.1:59719/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:50:37.205327   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:37.211521   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:50:37.211533   37212 api_server.go:102] status: https://127.0.0.1:59719/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:50:37.705372   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:50:37.710926   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 200:
	ok
	I0906 15:50:37.717670   37212 api_server.go:140] control plane version: v1.25.0
	I0906 15:50:37.717683   37212 api_server.go:130] duration metric: took 4.013570504s to wait for apiserver health ...
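The healthz progression above is the normal restart sequence: a connection-level EOF while the apiserver binds, then 403 (TLS is serving but the anonymous probe is not yet authorized), then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200. The probe itself is roughly (sketch, net/http plus crypto/tls, io, fmt, time; verification skipped because this targets a localhost-forwarded port):

    client := &http.Client{
        Timeout: 2 * time.Second,
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    for {
        resp, err := client.Get("https://127.0.0.1:59719/healthz")
        if err != nil {
            time.Sleep(500 * time.Millisecond) // EOF etc.: not up yet
            continue
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        if resp.StatusCode == http.StatusOK {
            break // "ok"
        }
        fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
        time.Sleep(500 * time.Millisecond)
    }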
	I0906 15:50:37.717690   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:50:37.717696   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:50:37.717709   37212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:50:37.726280   37212 system_pods.go:59] 8 kube-system pods found
	I0906 15:50:37.726299   37212 system_pods.go:61] "coredns-565d847f94-wkvwz" [31b21348-6685-429e-8101-a138d6f44c5a] Running
	I0906 15:50:37.726311   37212 system_pods.go:61] "etcd-default-k8s-different-port-20220906154915-22187" [06c9eba4-2eb0-4b4a-8923-14badd5235b3] Running
	I0906 15:50:37.726324   37212 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220906154915-22187" [81942a28-8b69-4b86-80be-4c3d54e8c71e] Running
	I0906 15:50:37.726333   37212 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220906154915-22187" [c814ed45-a563-4476-adc1-e14de96156f8] Running
	I0906 15:50:37.726343   37212 system_pods.go:61] "kube-proxy-t7vx8" [019bd2fb-a0da-477f-9df3-74757d6d787d] Running
	I0906 15:50:37.726356   37212 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220906154915-22187" [9434ace8-3845-48cc-8fff-67183116a1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:50:37.726364   37212 system_pods.go:61] "metrics-server-5c8fd5cf8-wnhzc" [23e9d7cc-1aca-4e2e-8ea9-ba6493231ca0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:50:37.726368   37212 system_pods.go:61] "storage-provisioner" [54518a3e-e36f-4f53-b169-0a62c4eabd66] Running
	I0906 15:50:37.726372   37212 system_pods.go:74] duration metric: took 8.658942ms to wait for pod list to return data ...
	I0906 15:50:37.726378   37212 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:50:37.729378   37212 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:50:37.729393   37212 node_conditions.go:123] node cpu capacity is 6
	I0906 15:50:37.729406   37212 node_conditions.go:105] duration metric: took 3.024346ms to run NodePressure ...
	I0906 15:50:37.729419   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:50:37.929739   37212 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:50:37.937679   37212 kubeadm.go:778] kubelet initialised
	I0906 15:50:37.937700   37212 kubeadm.go:779] duration metric: took 7.945238ms waiting for restarted kubelet to initialise ...
	I0906 15:50:37.937713   37212 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:50:37.946600   37212 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-wkvwz" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.953168   37212 pod_ready.go:92] pod "coredns-565d847f94-wkvwz" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:37.953178   37212 pod_ready.go:81] duration metric: took 6.561071ms waiting for pod "coredns-565d847f94-wkvwz" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.953187   37212 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.996891   37212 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:37.996900   37212 pod_ready.go:81] duration metric: took 43.709214ms waiting for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:37.996907   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.002735   37212 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:38.002745   37212 pod_ready.go:81] duration metric: took 5.833437ms waiting for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.002752   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.120788   37212 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:38.120798   37212 pod_ready.go:81] duration metric: took 118.040762ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.120805   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t7vx8" in "kube-system" namespace to be "Ready" ...
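Each pod_ready wait above polls the pod until its PodReady condition reports True or the 4m0s budget is spent. A client-go flavored sketch of that check (assumes a configured *kubernetes.Clientset; corev1 is k8s.io/api/core/v1, metav1 is k8s.io/apimachinery/pkg/apis/meta/v1):

    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return true
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }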
	I0906 15:50:37.763280   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.763293   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:37.763360   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:37.800010   36618 logs.go:274] 0 containers: []
	W0906 15:50:37.800025   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:37.800033   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:37.800042   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:37.848311   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:37.848332   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:37.863600   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:37.863623   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:37.940260   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:37.940278   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:37.940317   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:37.957971   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:37.957982   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:40.011474   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053472357s)
	I0906 15:50:42.513826   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:38.521134   37212 pod_ready.go:92] pod "kube-proxy-t7vx8" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:38.521144   37212 pod_ready.go:81] duration metric: took 400.332006ms waiting for pod "kube-proxy-t7vx8" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:38.521150   37212 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:40.932176   37212 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:43.018269   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:43.046980   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.046992   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:43.047050   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:43.075170   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.075183   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:43.075237   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:43.104514   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.104526   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:43.104582   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:43.133882   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.133894   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:43.133953   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:43.162356   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.162368   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:43.162431   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:43.197634   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.197648   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:43.197714   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:43.229904   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.229916   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:43.229973   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:43.261120   36618 logs.go:274] 0 containers: []
	W0906 15:50:43.261132   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:43.261140   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:43.261146   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:43.300082   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:43.300097   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:43.312225   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:43.312238   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:43.365232   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:43.365242   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:43.365249   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:43.380452   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:43.380465   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:45.435023   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054541052s)
	I0906 15:50:43.431899   37212 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:45.931027   37212 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:47.430333   37212 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:50:47.430345   37212 pod_ready.go:81] duration metric: took 8.909165101s waiting for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:47.430351   37212 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace to be "Ready" ...
	I0906 15:50:47.936850   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:48.016371   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:48.047334   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.047346   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:48.047400   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:48.079442   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.079453   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:48.079507   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:48.107817   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.107829   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:48.107887   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:48.136570   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.136583   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:48.136641   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:48.165367   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.165380   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:48.165438   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:48.193686   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.193699   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:48.193758   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:48.222001   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.222015   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:48.222073   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:48.249978   36618 logs.go:274] 0 containers: []
	W0906 15:50:48.249990   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:48.249998   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:48.250005   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:48.287143   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:48.287158   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:48.298409   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:48.298422   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:48.356790   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:48.356801   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:48.356815   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:48.370256   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:48.370268   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:50.421619   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051333533s)
	I0906 15:50:49.443659   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:51.942260   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:52.922613   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:53.016799   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:53.048909   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.048921   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:53.048980   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:53.077529   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.077542   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:53.077606   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:53.105518   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.105529   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:53.105586   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:53.135007   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.135020   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:53.135079   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:53.163328   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.163341   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:53.163396   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:53.191132   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.191143   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:53.191199   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:53.219655   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.219668   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:53.219724   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:53.248534   36618 logs.go:274] 0 containers: []
	W0906 15:50:53.248547   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:53.248554   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:50:53.248561   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:53.260251   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:53.260264   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:53.317573   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:53.317586   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:53.317592   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:53.332188   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:53.332202   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:50:55.385124   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052904546s)
	I0906 15:50:55.385230   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:50:55.385237   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:50:53.942333   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:55.942494   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:50:57.926420   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:50:58.017776   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:50:58.047321   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.047333   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:50:58.047397   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:50:58.075870   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.075882   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:50:58.075939   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:50:58.106804   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.106816   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:50:58.106874   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:50:58.136263   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.136276   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:50:58.136333   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:50:58.165517   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.165529   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:50:58.165586   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:50:58.194182   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.194194   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:50:58.194249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:50:58.222862   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.222874   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:50:58.222942   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:50:58.254161   36618 logs.go:274] 0 containers: []
	W0906 15:50:58.254174   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:50:58.254181   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:50:58.254192   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:50:58.307613   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:50:58.307626   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:50:58.307633   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:50:58.321788   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:50:58.321800   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:00.373491   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051674038s)
	I0906 15:51:00.373598   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:00.373605   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:00.412768   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:00.412783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:50:58.442534   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:00.942919   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:02.926085   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:03.016795   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:03.045519   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.045535   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:03.045594   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:03.077002   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.077014   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:03.077070   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:03.106731   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.106742   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:03.106803   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:03.137065   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.137078   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:03.137139   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:03.165960   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.165972   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:03.166031   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:03.194538   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.194552   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:03.194615   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:03.223613   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.223625   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:03.223692   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:03.252621   36618 logs.go:274] 0 containers: []
	W0906 15:51:03.252634   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:03.252642   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:03.252649   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:03.293046   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:03.293061   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:03.305992   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:03.306004   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:03.359768   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:03.359777   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:03.359783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:03.374067   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:03.374080   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:05.428493   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054394661s)
	I0906 15:51:03.440923   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:05.940922   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:07.930843   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:08.018364   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:08.050342   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.050356   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:08.050414   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:08.080802   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.080815   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:08.080874   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:08.110557   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.110570   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:08.110626   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:08.140588   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.140601   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:08.140658   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:08.171464   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.171477   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:08.171544   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:08.200615   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.200628   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:08.200684   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:08.231364   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.231376   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:08.231442   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:08.265358   36618 logs.go:274] 0 containers: []
	W0906 15:51:08.265372   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:08.265379   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:08.265386   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:08.279229   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:08.279242   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:10.332629   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053369757s)
	I0906 15:51:10.332737   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:10.332744   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:10.371046   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:10.371061   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:10.382429   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:10.382441   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:10.434114   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:08.442971   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:10.943493   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:12.935172   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:13.016810   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:13.048233   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.048247   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:13.048307   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:13.076100   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.076112   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:13.076167   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:13.105312   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.105329   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:13.105397   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:13.134422   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.134434   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:13.134509   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:13.163088   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.163100   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:13.163156   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:13.192169   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.192181   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:13.192249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:13.221272   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.221284   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:13.221342   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:13.249896   36618 logs.go:274] 0 containers: []
	W0906 15:51:13.249907   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:13.249914   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:13.249921   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:13.261316   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:13.261328   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:13.316693   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:13.316704   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:13.316710   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:13.333605   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:13.333618   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:15.389543   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05590645s)
	I0906 15:51:15.389649   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:15.389657   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:13.441127   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:15.442305   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:17.940913   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:17.929544   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:18.017317   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:18.049613   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.049625   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:18.049682   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:18.078124   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.078137   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:18.078194   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:18.106846   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.106859   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:18.106916   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:18.136908   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.136920   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:18.136977   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:18.165211   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.165223   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:18.165281   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:18.194317   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.194329   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:18.194387   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:18.225530   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.225543   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:18.225602   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:18.254758   36618 logs.go:274] 0 containers: []
	W0906 15:51:18.254770   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:18.254777   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:18.254783   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:18.296280   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:18.296292   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:18.307948   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:18.307960   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:18.361906   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:18.361916   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:18.361922   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:18.376020   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:18.376033   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:20.430813   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054762389s)
	I0906 15:51:19.942784   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:22.441622   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:22.931094   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:23.016599   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:23.047383   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.047395   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:23.047452   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:23.076558   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.076570   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:23.076629   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:23.105158   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.105174   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:23.105249   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:23.134903   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.134915   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:23.134970   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:23.163722   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.163737   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:23.163797   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:23.193082   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.193103   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:23.193179   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:23.223206   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.223218   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:23.223279   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:23.253242   36618 logs.go:274] 0 containers: []
	W0906 15:51:23.253254   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:23.253264   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:23.253273   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:23.269441   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:23.269454   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:25.324087   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054614433s)
	I0906 15:51:25.324197   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:25.324204   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:25.362495   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:25.362508   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:25.373850   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:25.373864   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:25.427416   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
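Every "describe nodes" attempt fails the same way: localhost:8443, the apiserver's secure port, refuses connections, which is consistent with the empty container probes. A direct check from inside the node (a sketch; assumes curl is available in the node image):

	curl -sk https://localhost:8443/healthz
	# "connection refused" confirms that no kube-apiserver is listening
	# at all, rather than an unhealthy one answering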
	I0906 15:51:24.443600   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:26.943789   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:27.927755   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:28.018461   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:28.049083   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.049096   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:28.049151   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:28.076915   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.076926   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:28.076984   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:28.105609   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.105624   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:28.105682   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:28.135415   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.135427   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:28.135483   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:28.165044   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.165057   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:28.165117   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:28.194961   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.194972   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:28.195027   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:28.224560   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.224572   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:28.224626   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:28.253940   36618 logs.go:274] 0 containers: []
	W0906 15:51:28.253953   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:28.253961   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:28.253970   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:28.293324   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:28.293338   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:28.304502   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:28.304515   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:28.358820   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:28.358831   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:28.358838   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:28.372433   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:28.372444   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:30.425146   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052684469s)
	I0906 15:51:29.442830   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:31.940449   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:32.927175   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:33.017341   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:51:33.048887   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.048900   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:51:33.048957   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:51:33.077441   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.077452   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:51:33.077514   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:51:33.106906   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.106919   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:51:33.106981   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:51:33.136315   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.136327   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:51:33.136384   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:51:33.164846   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.164859   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:51:33.164920   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:51:33.210609   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.210620   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:51:33.210680   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:51:33.242201   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.242213   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:51:33.242269   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:51:33.270214   36618 logs.go:274] 0 containers: []
	W0906 15:51:33.270226   36618 logs.go:276] No container was found matching "kube-controller-manager"
	I0906 15:51:33.270233   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:51:33.270240   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:51:33.310549   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:51:33.310565   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:51:33.322387   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:51:33.322400   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:51:33.374793   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:51:33.374804   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:51:33.374812   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:51:33.388065   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:51:33.388077   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:51:35.437468   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04937256s)
	I0906 15:51:33.941085   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:36.442094   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:37.937790   36618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:51:37.948009   36618 kubeadm.go:631] restartCluster took 4m5.383312357s
	W0906 15:51:37.948093   36618 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
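The 4m5s restartCluster budget was spent repeating the pgrep probe shown above; "apiserver process never appeared" summarizes its consistently empty result. Run manually (same pattern the log uses):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# no output and a non-zero exit status is the failure mode
	# this warning reports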
	I0906 15:51:37.948113   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:51:38.373075   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:51:38.382614   36618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:51:38.390078   36618 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:51:38.390124   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:51:38.397462   36618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:51:38.397491   36618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:51:38.444468   36618 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:51:38.444514   36618 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:51:38.751851   36618 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:51:38.751951   36618 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:51:38.752044   36618 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:51:39.022935   36618 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:51:39.023421   36618 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:51:39.030200   36618 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:51:39.096240   36618 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:51:39.120068   36618 out.go:204]   - Generating certificates and keys ...
	I0906 15:51:39.120143   36618 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:51:39.120223   36618 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:51:39.120334   36618 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:51:39.120397   36618 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:51:39.120462   36618 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:51:39.120529   36618 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:51:39.120590   36618 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:51:39.120645   36618 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:51:39.120727   36618 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:51:39.120792   36618 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:51:39.120833   36618 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:51:39.120892   36618 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:51:39.515774   36618 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:51:39.628999   36618 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:51:39.816570   36618 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:51:39.960203   36618 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:51:39.960886   36618 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:51:40.003202   36618 out.go:204]   - Booting up control plane ...
	I0906 15:51:40.003301   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:51:40.003379   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:51:40.003447   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:51:40.003511   36618 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:51:40.003627   36618 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:51:38.941689   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:41.443572   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:43.941795   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:46.441286   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:48.941320   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:51.442966   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:53.940073   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:55.940873   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:51:57.943480   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:00.441774   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:02.940658   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:04.940941   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:06.943633   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:09.443762   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:11.940301   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:13.941452   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:15.941955   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:19.941067   36618 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:52:19.941616   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:19.941780   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
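The [kubelet-check] probe is exactly the command kubeadm prints: an HTTP GET against the kubelet's healthz port. Replaying it inside the node separates "kubelet down" from "kubelet unhealthy":

	curl -sSL http://localhost:10248/healthz
	# connect: connection refused (as here) means the kubelet process
	# itself never came up; an "ok" body would instead point at the
	# control-plane containers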
	I0906 15:52:18.443378   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:20.941205   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:22.944080   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:24.939499   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:24.939741   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:25.440548   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:27.441072   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:29.940396   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:31.942049   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:34.933630   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:34.933937   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:33.942419   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:36.444518   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:38.941160   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:40.941401   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:42.942085   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:45.442441   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:47.940847   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:49.943953   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:52.441492   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:54.920474   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:52:54.920618   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:52:54.940040   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:56.941544   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:52:58.943275   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:01.440638   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:03.441633   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:05.940226   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:07.941507   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:10.440810   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:12.440867   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:14.441996   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:16.943539   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:19.441181   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:21.443341   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:23.443498   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:25.942678   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:27.943717   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:30.442290   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:32.941144   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:34.893294   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:53:34.893561   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:53:34.893577   36618 kubeadm.go:317] 
	I0906 15:53:34.893622   36618 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:53:34.893683   36618 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:53:34.893694   36618 kubeadm.go:317] 
	I0906 15:53:34.893731   36618 kubeadm.go:317] This error is likely caused by:
	I0906 15:53:34.893787   36618 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:53:34.893917   36618 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:53:34.893925   36618 kubeadm.go:317] 
	I0906 15:53:34.894045   36618 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:53:34.894099   36618 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:53:34.894131   36618 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:53:34.894142   36618 kubeadm.go:317] 
	I0906 15:53:34.894228   36618 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:53:34.894312   36618 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:53:34.894377   36618 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:53:34.894411   36618 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:53:34.894474   36618 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:53:34.894503   36618 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0906 15:53:34.897717   36618 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:53:34.897844   36618 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:53:34.897942   36618 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:53:34.898018   36618 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:53:34.898086   36618 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0906 15:53:34.898216   36618 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
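kubeadm's own triage steps, quoted above, map to a short sequence; a minimal sketch (assumes a shell on the node, e.g. via minikube ssh into the failing profile):

	systemctl status kubelet                  # is the unit active at all?
	journalctl -xeu kubelet | tail -n 50      # why it exited, if it started
	docker ps -a | grep kube | grep -v pause  # any control-plane containers?
	docker logs CONTAINERID                   # inspect one, if found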
	
	I0906 15:53:34.898243   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0906 15:53:35.322770   36618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:53:35.332350   36618 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:53:35.332397   36618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:53:35.340038   36618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:53:35.340060   36618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:53:35.385462   36618 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0906 15:53:35.385503   36618 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:53:35.695132   36618 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:53:35.695219   36618 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:53:35.695302   36618 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:53:35.979308   36618 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:53:35.979962   36618 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:53:35.986584   36618 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0906 15:53:36.049897   36618 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:53:36.071432   36618 out.go:204]   - Generating certificates and keys ...
	I0906 15:53:36.071511   36618 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:53:36.071599   36618 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:53:36.071705   36618 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:53:36.071754   36618 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:53:36.071836   36618 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:53:36.071932   36618 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:53:36.072028   36618 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:53:36.072072   36618 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:53:36.072132   36618 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:53:36.072207   36618 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:53:36.072239   36618 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:53:36.072293   36618 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:53:36.386098   36618 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:53:36.481839   36618 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:53:36.735962   36618 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:53:36.848356   36618 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:53:36.849031   36618 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:53:36.870925   36618 out.go:204]   - Booting up control plane ...
	I0906 15:53:36.871084   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:53:36.871201   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:53:36.871311   36618 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:53:36.871457   36618 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:53:36.871744   36618 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:53:35.440714   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:37.441318   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:39.441654   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:41.442159   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:43.940095   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:45.940829   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:47.941618   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:50.441918   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:52.940878   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:54.943528   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:56.943592   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:53:59.442374   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:01.443183   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:03.944275   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:06.442342   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:08.942198   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:11.442663   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:16.829056   36618 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0906 15:54:16.829917   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:16.830124   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:13.444236   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:15.941133   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:17.942335   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:21.827690   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:21.827848   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:20.442403   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:22.941548   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:24.942579   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:27.441632   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:31.820981   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:31.821186   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:29.444387   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:31.942340   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:34.441535   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:36.442205   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:38.943078   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:41.441772   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:43.940702   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:45.941793   37212 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace has status "Ready":"False"
	I0906 15:54:47.436849   37212 pod_ready.go:81] duration metric: took 4m0.005822558s waiting for pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace to be "Ready" ...
	E0906 15:54:47.436870   37212 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c8fd5cf8-wnhzc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 15:54:47.436887   37212 pod_ready.go:38] duration metric: took 4m9.498472217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:54:47.436919   37212 kubeadm.go:631] restartCluster took 4m19.144412803s
	W0906 15:54:47.437043   37212 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
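In the second profile (pid 37212), the 4m0s wait was consumed by a single pod, metrics-server-5c8fd5cf8-wnhzc, never reaching Ready. Outside the harness, the equivalent look would be (a sketch; pod name taken from the log above):

	kubectl -n kube-system describe pod metrics-server-5c8fd5cf8-wnhzc
	# the Events section normally shows the failing readiness probe or
	# image pull that keeps the pod NotReady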
	I0906 15:54:47.437069   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 15:54:51.743270   37212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.306176563s)
	I0906 15:54:51.743330   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:54:51.752980   37212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:54:51.760278   37212 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 15:54:51.760326   37212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:54:51.767387   37212 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 15:54:51.767414   37212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 15:54:51.808770   37212 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
	I0906 15:54:51.808802   37212 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 15:54:51.904557   37212 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 15:54:51.904648   37212 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 15:54:51.904725   37212 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 15:54:52.025732   37212 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 15:54:52.050514   37212 out.go:204]   - Generating certificates and keys ...
	I0906 15:54:52.050582   37212 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 15:54:52.050668   37212 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 15:54:52.050742   37212 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 15:54:52.050789   37212 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 15:54:52.050842   37212 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 15:54:52.050887   37212 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 15:54:52.050939   37212 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 15:54:52.050986   37212 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 15:54:52.051056   37212 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 15:54:52.051129   37212 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 15:54:52.051161   37212 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 15:54:52.051204   37212 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 15:54:52.104655   37212 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 15:54:52.266933   37212 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 15:54:52.455099   37212 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 15:54:52.599889   37212 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 15:54:52.611289   37212 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 15:54:52.611867   37212 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 15:54:52.611907   37212 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 15:54:52.691695   37212 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 15:54:51.807304   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:54:51.807458   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:54:52.713079   37212 out.go:204]   - Booting up control plane ...
	I0906 15:54:52.713174   37212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 15:54:52.713236   37212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 15:54:52.713297   37212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 15:54:52.713374   37212 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 15:54:52.713513   37212 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 15:54:58.196526   37212 kubeadm.go:317] [apiclient] All control plane components are healthy after 5.503547 seconds
	I0906 15:54:58.196654   37212 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 15:54:58.203434   37212 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 15:54:58.718698   37212 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 15:54:58.718859   37212 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220906154915-22187 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 15:54:59.224635   37212 kubeadm.go:317] [bootstrap-token] Using token: g5os1h.xfjbuvdd1xawa0ky
	I0906 15:54:59.261788   37212 out.go:204]   - Configuring RBAC rules ...
	I0906 15:54:59.262049   37212 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 15:54:59.262337   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 15:54:59.268841   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 15:54:59.270852   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 15:54:59.272955   37212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 15:54:59.274702   37212 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 15:54:59.281328   37212 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 15:54:59.432647   37212 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0906 15:54:59.632647   37212 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0906 15:54:59.633705   37212 kubeadm.go:317] 
	I0906 15:54:59.633803   37212 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0906 15:54:59.633816   37212 kubeadm.go:317] 
	I0906 15:54:59.633881   37212 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0906 15:54:59.633888   37212 kubeadm.go:317] 
	I0906 15:54:59.633907   37212 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0906 15:54:59.633950   37212 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 15:54:59.633984   37212 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 15:54:59.633989   37212 kubeadm.go:317] 
	I0906 15:54:59.634058   37212 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0906 15:54:59.634067   37212 kubeadm.go:317] 
	I0906 15:54:59.634138   37212 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 15:54:59.634148   37212 kubeadm.go:317] 
	I0906 15:54:59.634185   37212 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0906 15:54:59.634235   37212 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 15:54:59.634291   37212 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 15:54:59.634298   37212 kubeadm.go:317] 
	I0906 15:54:59.634350   37212 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 15:54:59.634399   37212 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0906 15:54:59.634404   37212 kubeadm.go:317] 
	I0906 15:54:59.634457   37212 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token g5os1h.xfjbuvdd1xawa0ky \
	I0906 15:54:59.634532   37212 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
	I0906 15:54:59.634554   37212 kubeadm.go:317] 	--control-plane 
	I0906 15:54:59.634562   37212 kubeadm.go:317] 
	I0906 15:54:59.634628   37212 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0906 15:54:59.634634   37212 kubeadm.go:317] 
	I0906 15:54:59.634703   37212 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token g5os1h.xfjbuvdd1xawa0ky \
	I0906 15:54:59.634778   37212 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
	I0906 15:54:59.637971   37212 kubeadm.go:317] W0906 22:54:51.815271    7827 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 15:54:59.638087   37212 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 15:54:59.638192   37212 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 15:54:59.638305   37212 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
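The join commands printed above embed a --discovery-token-ca-cert-hash. As a sketch (assuming the usual RSA CA key, per the upstream kubeadm docs), the same hash can be recomputed on the node from the CA certificate in the certificateDir logged above:

    # recompute the discovery token CA cert hash from the cluster CA
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'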
	I0906 15:54:59.638322   37212 cni.go:95] Creating CNI manager for ""
	I0906 15:54:59.638333   37212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:54:59.638353   37212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:54:59.638418   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:54:59.638453   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4 minikube.k8s.io/name=default-k8s-different-port-20220906154915-22187 minikube.k8s.io/updated_at=2022_09_06T15_54_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:54:59.652946   37212 ops.go:34] apiserver oom_adj: -16
	I0906 15:54:59.765132   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:00.356297   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:00.855510   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:01.356044   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:01.855680   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:02.357560   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:02.855496   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:03.356576   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:03.857064   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:04.356922   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:04.855648   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:05.355509   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:05.856812   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:06.356378   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:06.856487   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:07.357002   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:07.855628   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:08.357475   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:08.855615   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:09.356132   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:09.856796   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:10.355518   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:10.855528   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:11.356121   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:11.855538   37212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 15:55:11.915907   37212 kubeadm.go:1046] duration metric: took 12.277509448s to wait for elevateKubeSystemPrivileges.
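The ~500ms-spaced "get sa default" retries above are the elevateKubeSystemPrivileges wait: minikube polls until the "default" ServiceAccount exists. A minimal shell equivalent of that loop:

    # poll until the "default" ServiceAccount appears, as the retries above do
    until sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done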
	I0906 15:55:11.915924   37212 kubeadm.go:398] StartCluster complete in 4m43.659305517s
	I0906 15:55:11.915940   37212 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:55:11.916016   37212 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:55:11.916547   37212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:55:12.432639   37212 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220906154915-22187" rescaled to 1
	I0906 15:55:12.432672   37212 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:55:12.432680   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:55:12.432706   37212 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 15:55:12.456099   37212 out.go:177] * Verifying Kubernetes components...
	I0906 15:55:12.432831   37212 config.go:180] Loaded profile config "default-k8s-different-port-20220906154915-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:55:12.456163   37212 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.456171   37212 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.456174   37212 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.456176   37212 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.499149   37212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 15:55:12.529511   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:55:12.529526   37212 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.529528   37212 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.529535   37212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220906154915-22187"
	W0906 15:55:12.529545   37212 addons.go:162] addon dashboard should already be in state true
	W0906 15:55:12.529553   37212 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:55:12.529626   37212 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.529660   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	W0906 15:55:12.529689   37212 addons.go:162] addon metrics-server should already be in state true
	I0906 15:55:12.529658   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	I0906 15:55:12.529766   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	I0906 15:55:12.530198   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.531221   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.531900   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.532011   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.549127   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.680879   37212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:55:12.640947   37212 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:12.661070   37212 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	W0906 15:55:12.680984   37212 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:55:12.718152   37212 host.go:66] Checking if "default-k8s-different-port-20220906154915-22187" exists ...
	I0906 15:55:12.718210   37212 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:55:12.775898   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:55:12.755048   37212 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 15:55:12.776017   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.850072   37212 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 15:55:12.776417   37212 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220906154915-22187 --format={{.State.Status}}
	I0906 15:55:12.813187   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 15:55:12.829884   37212 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220906154915-22187" to be "Ready" ...
	I0906 15:55:12.887424   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 15:55:12.887573   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 15:55:12.887589   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 15:55:12.887599   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.888232   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.901144   37212 node_ready.go:49] node "default-k8s-different-port-20220906154915-22187" has status "Ready":"True"
	I0906 15:55:12.901168   37212 node_ready.go:38] duration metric: took 13.7942ms waiting for node "default-k8s-different-port-20220906154915-22187" to be "Ready" ...
	I0906 15:55:12.901178   37212 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:55:12.916307   37212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-6g7xm" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:12.938564   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:12.974260   37212 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:55:12.974271   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:55:12.974329   37212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220906154915-22187
	I0906 15:55:12.976572   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:12.979654   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:13.045815   37212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59715 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/default-k8s-different-port-20220906154915-22187/id_rsa Username:docker}
	I0906 15:55:13.108896   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:55:13.121508   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 15:55:13.121527   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 15:55:13.131848   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 15:55:13.131866   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 15:55:13.209761   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 15:55:13.209774   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 15:55:13.223186   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:55:13.302916   37212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:55:13.302940   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 15:55:13.309224   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 15:55:13.309237   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 15:55:13.327162   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:55:13.395379   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 15:55:13.428686   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 15:55:13.522685   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 15:55:13.522699   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 15:55:13.626615   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 15:55:13.626632   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 15:55:13.721707   37212 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.192261292s)
	I0906 15:55:13.721737   37212 start.go:810] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
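The pipeline that just completed rewrites the coredns ConfigMap in place. Broken out for readability (same command as logged at 15:55:12), it inserts a hosts{} block resolving host.minikube.internal to 192.168.65.2 ahead of the forward plugin:

    sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' \
      | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -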
	I0906 15:55:13.794268   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 15:55:13.794285   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 15:55:13.920312   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 15:55:13.920326   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 15:55:14.005171   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 15:55:14.005188   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 15:55:14.022831   37212 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:55:14.022846   37212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 15:55:14.105185   37212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:55:14.326598   37212 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220906154915-22187"
	I0906 15:55:14.935413   37212 pod_ready.go:102] pod "coredns-565d847f94-6g7xm" in "kube-system" namespace has status "Ready":"False"
	I0906 15:55:15.141698   37212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0906 15:55:15.178672   37212 addons.go:414] enableAddons completed in 2.745959213s
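Each addon above was staged as YAML under /etc/kubernetes/addons via scp and applied with a single kubectl apply. A sketch of re-applying the staged manifests by hand (kubectl apply accepts a directory; paths as logged):

    # idempotently re-apply everything minikube staged on the node
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/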
	I0906 15:55:16.937654   37212 pod_ready.go:102] pod "coredns-565d847f94-6g7xm" in "kube-system" namespace has status "Ready":"False"
	I0906 15:55:17.935795   37212 pod_ready.go:92] pod "coredns-565d847f94-6g7xm" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:17.935809   37212 pod_ready.go:81] duration metric: took 5.01946616s waiting for pod "coredns-565d847f94-6g7xm" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:17.935816   37212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-q4mb7" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.446882   37212 pod_ready.go:92] pod "coredns-565d847f94-q4mb7" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.446896   37212 pod_ready.go:81] duration metric: took 511.073117ms waiting for pod "coredns-565d847f94-q4mb7" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.446904   37212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.451838   37212 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.451848   37212 pod_ready.go:81] duration metric: took 4.936622ms waiting for pod "etcd-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.451854   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.457179   37212 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.457189   37212 pod_ready.go:81] duration metric: took 5.329087ms waiting for pod "kube-apiserver-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.457196   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.461768   37212 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.461778   37212 pod_ready.go:81] duration metric: took 4.575554ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.461784   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmfkn" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.733119   37212 pod_ready.go:92] pod "kube-proxy-tmfkn" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:18.733129   37212 pod_ready.go:81] duration metric: took 271.339141ms waiting for pod "kube-proxy-tmfkn" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:18.733137   37212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:19.132361   37212 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:55:19.132371   37212 pod_ready.go:81] duration metric: took 399.227312ms waiting for pod "kube-scheduler-default-k8s-different-port-20220906154915-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:55:19.132376   37212 pod_ready.go:38] duration metric: took 6.231173997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
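The per-pod Ready waits above can be approximated outside minikube with kubectl wait; a hedged example using one of the label selectors from the list logged at 15:55:12 (k8s-app=kube-dns):

    # roughly equivalent to the coredns readiness polling above
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m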
	I0906 15:55:19.132390   37212 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:55:19.132442   37212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:55:19.143302   37212 api_server.go:71] duration metric: took 6.710591857s to wait for apiserver process to appear ...
	I0906 15:55:19.143315   37212 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:55:19.143323   37212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59719/healthz ...
	I0906 15:55:19.148529   37212 api_server.go:266] https://127.0.0.1:59719/healthz returned 200:
	ok
	I0906 15:55:19.149651   37212 api_server.go:140] control plane version: v1.25.0
	I0906 15:55:19.149659   37212 api_server.go:130] duration metric: took 6.340438ms to wait for apiserver health ...
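The healthz check above is a plain HTTPS GET against the forwarded apiserver port (59719 here); the same probe by hand:

    curl -sk https://127.0.0.1:59719/healthz   # prints "ok" on a healthy apiserver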
	I0906 15:55:19.149665   37212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:55:19.338022   37212 system_pods.go:59] 9 kube-system pods found
	I0906 15:55:19.338037   37212 system_pods.go:61] "coredns-565d847f94-6g7xm" [cd12e82d-279c-477c-82a6-77663bdacc76] Running
	I0906 15:55:19.338041   37212 system_pods.go:61] "coredns-565d847f94-q4mb7" [9e68ed76-3285-4c00-9e6f-54f5de87e7a4] Running
	I0906 15:55:19.338045   37212 system_pods.go:61] "etcd-default-k8s-different-port-20220906154915-22187" [e5c83ff5-8057-4ec5-9c5e-268a762eb62a] Running
	I0906 15:55:19.338049   37212 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220906154915-22187" [ac2adb4b-dbde-47e6-9e92-97a6c9ee96f4] Running
	I0906 15:55:19.338053   37212 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220906154915-22187" [0163f669-ebfc-46ce-aa87-ffce3904c5e1] Running
	I0906 15:55:19.338059   37212 system_pods.go:61] "kube-proxy-tmfkn" [c9364049-c8f3-468a-867e-50133dcc208b] Running
	I0906 15:55:19.338064   37212 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220906154915-22187" [887554cf-68d1-4e4f-bc6f-0d65eb7e3d28] Running
	I0906 15:55:19.338069   37212 system_pods.go:61] "metrics-server-5c8fd5cf8-2pdjw" [b88a6579-9359-435f-8fb4-b7ec5c7f7d52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:55:19.338078   37212 system_pods.go:61] "storage-provisioner" [da22f144-e345-4b66-b770-500d22a98dfc] Running
	I0906 15:55:19.338082   37212 system_pods.go:74] duration metric: took 188.413972ms to wait for pod list to return data ...
	I0906 15:55:19.338089   37212 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:55:19.532218   37212 default_sa.go:45] found service account: "default"
	I0906 15:55:19.532231   37212 default_sa.go:55] duration metric: took 194.136492ms for default service account to be created ...
	I0906 15:55:19.532236   37212 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 15:55:19.735925   37212 system_pods.go:86] 9 kube-system pods found
	I0906 15:55:19.735939   37212 system_pods.go:89] "coredns-565d847f94-6g7xm" [cd12e82d-279c-477c-82a6-77663bdacc76] Running
	I0906 15:55:19.735944   37212 system_pods.go:89] "coredns-565d847f94-q4mb7" [9e68ed76-3285-4c00-9e6f-54f5de87e7a4] Running
	I0906 15:55:19.735947   37212 system_pods.go:89] "etcd-default-k8s-different-port-20220906154915-22187" [e5c83ff5-8057-4ec5-9c5e-268a762eb62a] Running
	I0906 15:55:19.735957   37212 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220906154915-22187" [ac2adb4b-dbde-47e6-9e92-97a6c9ee96f4] Running
	I0906 15:55:19.735962   37212 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220906154915-22187" [0163f669-ebfc-46ce-aa87-ffce3904c5e1] Running
	I0906 15:55:19.735968   37212 system_pods.go:89] "kube-proxy-tmfkn" [c9364049-c8f3-468a-867e-50133dcc208b] Running
	I0906 15:55:19.735972   37212 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220906154915-22187" [887554cf-68d1-4e4f-bc6f-0d65eb7e3d28] Running
	I0906 15:55:19.735977   37212 system_pods.go:89] "metrics-server-5c8fd5cf8-2pdjw" [b88a6579-9359-435f-8fb4-b7ec5c7f7d52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:55:19.735981   37212 system_pods.go:89] "storage-provisioner" [da22f144-e345-4b66-b770-500d22a98dfc] Running
	I0906 15:55:19.735986   37212 system_pods.go:126] duration metric: took 203.746511ms to wait for k8s-apps to be running ...
	I0906 15:55:19.735991   37212 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 15:55:19.736042   37212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:55:19.746224   37212 system_svc.go:56] duration metric: took 10.227063ms WaitForService to wait for kubelet.
	I0906 15:55:19.746239   37212 kubeadm.go:573] duration metric: took 7.313531095s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 15:55:19.746256   37212 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:55:19.935919   37212 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:55:19.935936   37212 node_conditions.go:123] node cpu capacity is 6
	I0906 15:55:19.935944   37212 node_conditions.go:105] duration metric: took 189.682536ms to run NodePressure ...
	I0906 15:55:19.935956   37212 start.go:216] waiting for startup goroutines ...
	I0906 15:55:19.974175   37212 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:55:20.010226   37212 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220906154915-22187" cluster and "default" namespace by default
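With the Done! line the kubeconfig context has been switched to the new profile; a quick sanity check of what was configured:

    kubectl config current-context   # default-k8s-different-port-20220906154915-22187
    kubectl get nodes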
	I0906 15:55:31.779661   36618 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 15:55:31.779822   36618 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 15:55:31.779830   36618 kubeadm.go:317] 
	I0906 15:55:31.779860   36618 kubeadm.go:317] Unfortunately, an error has occurred:
	I0906 15:55:31.779889   36618 kubeadm.go:317] 	timed out waiting for the condition
	I0906 15:55:31.779894   36618 kubeadm.go:317] 
	I0906 15:55:31.779921   36618 kubeadm.go:317] This error is likely caused by:
	I0906 15:55:31.779960   36618 kubeadm.go:317] 	- The kubelet is not running
	I0906 15:55:31.780052   36618 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 15:55:31.780063   36618 kubeadm.go:317] 
	I0906 15:55:31.780169   36618 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 15:55:31.780219   36618 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0906 15:55:31.780247   36618 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0906 15:55:31.780251   36618 kubeadm.go:317] 
	I0906 15:55:31.780328   36618 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 15:55:31.780416   36618 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0906 15:55:31.780495   36618 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0906 15:55:31.780559   36618 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0906 15:55:31.780661   36618 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0906 15:55:31.780715   36618 kubeadm.go:317] 	- 'docker logs CONTAINERID'
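The kubelet-check failure above ships its own remediation steps; consolidated into one sequence (commands exactly as suggested in the log, run inside the node):

    systemctl status kubelet
    journalctl -xeu kubelet
    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID   # substitute the ID of the failing container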
	I0906 15:55:31.783923   36618 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0906 15:55:31.784047   36618 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
	I0906 15:55:31.784168   36618 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 15:55:31.784249   36618 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 15:55:31.784306   36618 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0906 15:55:31.784333   36618 kubeadm.go:398] StartCluster complete in 7m59.255788376s
	I0906 15:55:31.784406   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 15:55:31.816119   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.816135   36618 logs.go:276] No container was found matching "kube-apiserver"
	I0906 15:55:31.816207   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 15:55:31.852948   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.852961   36618 logs.go:276] No container was found matching "etcd"
	I0906 15:55:31.853021   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 15:55:31.884845   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.884856   36618 logs.go:276] No container was found matching "coredns"
	I0906 15:55:31.884911   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 15:55:31.917054   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.917068   36618 logs.go:276] No container was found matching "kube-scheduler"
	I0906 15:55:31.917132   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 15:55:31.948382   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.948395   36618 logs.go:276] No container was found matching "kube-proxy"
	I0906 15:55:31.948451   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0906 15:55:31.982328   36618 logs.go:274] 0 containers: []
	W0906 15:55:31.982339   36618 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0906 15:55:31.982387   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 15:55:32.013438   36618 logs.go:274] 0 containers: []
	W0906 15:55:32.013450   36618 logs.go:276] No container was found matching "storage-provisioner"
	I0906 15:55:32.013510   36618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 15:55:32.044826   36618 logs.go:274] 0 containers: []
	W0906 15:55:32.044840   36618 logs.go:276] No container was found matching "kube-controller-manager"
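The eight container lookups above follow one pattern, varying only the k8s_<component> name filter; the same sweep as a single loop:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
    done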
	I0906 15:55:32.044847   36618 logs.go:123] Gathering logs for kubelet ...
	I0906 15:55:32.044854   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 15:55:32.085941   36618 logs.go:123] Gathering logs for dmesg ...
	I0906 15:55:32.085955   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 15:55:32.097748   36618 logs.go:123] Gathering logs for describe nodes ...
	I0906 15:55:32.097762   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 15:55:32.160044   36618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 15:55:32.160054   36618 logs.go:123] Gathering logs for Docker ...
	I0906 15:55:32.160060   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0906 15:55:32.174249   36618 logs.go:123] Gathering logs for container status ...
	I0906 15:55:32.174260   36618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 15:55:34.234529   36618 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060250655s)
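The log-gathering pass above collects kubelet, dmesg, Docker, and container status; the same bundle by hand (commands as logged, including the crictl-or-docker fallback):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u docker -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a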
	W0906 15:55:34.234640   36618 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 15:55:34.234654   36618 out.go:239] * 
	W0906 15:55:34.234769   36618 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 15:55:34.234800   36618 out.go:239] * 
	W0906 15:55:34.235311   36618 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
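The advice box above asks for an attached log file; the capture command it names (adding -p <profile> would scope it when several profiles run in parallel, as in this job):

    minikube logs --file=logs.txt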
	I0906 15:55:34.299125   36618 out.go:177] 
	W0906 15:55:34.342220   36618 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
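	For reference, a minimal sketch of the triage loop the kubeadm message above describes, assuming a systemd host running the docker runtime; CONTAINERID is a placeholder for the ID of whichever container is failing:
		# Is the kubelet service running, and why did it exit?
		systemctl status kubelet
		journalctl -xeu kubelet
		# Probe the same healthz endpoint the kubeadm wait loop polls
		curl -sS http://localhost:10248/healthz
		# List Kubernetes containers (pause sandboxes filtered out), then inspect one
		docker ps -a | grep kube | grep -v pause
		docker logs CONTAINERID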
	
	W0906 15:55:34.342329   36618 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 15:55:34.342385   36618 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 15:55:34.385240   36618 out.go:177] 
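	A hedged example of the suggested workaround above; the profile name is a placeholder, and the --extra-config flag simply forwards the cgroup-driver setting to the kubelet:
		minikube start -p <profile> --driver=docker --extra-config=kubelet.cgroup-driver=systemd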
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:50:24 UTC, end at Tue 2022-09-06 22:56:12 UTC. --
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.800591872Z" level=info msg="ignoring event" container=4b235df16e6994fa3ef897cbf0a8e6d69de49878a76f92fe70b36ff2f00e56d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.907200741Z" level=info msg="ignoring event" container=0fc0a1ce243fbbb6e3fa81d97cd0c596b1d25cf1700320e502e789e9a0667785 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:50 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:50.974871558Z" level=info msg="ignoring event" container=ee9a9f6db46303cbe9530cce49f01329f4b49afa485c0db0f5351fe9f86346ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.044209645Z" level=info msg="ignoring event" container=2b8ccbe97df0dd741bc0c8e562761eba97dfe08b52008409521428b2e56a6879 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.107481448Z" level=info msg="ignoring event" container=3e74234684fdbe4487659498dc5474f5d575aeb84113397e81baff21f1ef0358 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.188878823Z" level=info msg="ignoring event" container=d8a639143af94543d1a9cc7b19ec897f39a230396d154559b4675fd9177a7d59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.258227856Z" level=info msg="ignoring event" container=bd635a365f6142d60f9c92baff1a46c39d752a10fe1c612058c308092c5dcccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:54:51 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:54:51.377184294Z" level=info msg="ignoring event" container=f2205f191166bce5eb516411fa4cb06f95b1b0967ef4a6276cab71bd69551b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:15 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:15.416732067Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:15 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:15.417303083Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:15 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:15.418563655Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:16 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:16.148234964Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Sep 06 22:55:20 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:20.407146682Z" level=info msg="ignoring event" container=31d7700143a426b8af1544bb1bf9357019b278e702a8a58bc28705eb91a6f642 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:20 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:20.553452925Z" level=info msg="ignoring event" container=45ec75e68f94aaf8ee9d8da70e4326ac5369d9e736d9499cad4d66fc8f8f6826 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:21 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:21.458228531Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 22:55:21 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:21.627109374Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 22:55:24 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:24.918173262Z" level=info msg="ignoring event" container=ad3a30b30ec958e3d18c03553fa87e0baf6dcf5e03dc50c8dc9c78aa77b55f57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:25 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:25.611877865Z" level=info msg="ignoring event" container=07e1a936211443892c9b484fefae3f75027257e3b2f7b9de565e37fe397ae1de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:55:31 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:31.585401160Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:31 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:31.585443289Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:55:31 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:55:31.586966538Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:56:09.624960316Z" level=info msg="ignoring event" container=0e1a39103c72d03baf7553906eb73a58617d26726101c478bbc42b9fe8158351 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:56:09.867656223Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:56:09.867703148Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 dockerd[545]: time="2022-09-06T22:56:09.868989112Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	0e1a39103c72d       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   851aacc203834
	e3c507c5d7b13       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   52 seconds ago       Running             kubernetes-dashboard        0                   5dcc0b092221f
	bf51ca1279c61       6e38f40d628db                                                                                    58 seconds ago       Running             storage-provisioner         0                   c86003c5a89a8
	930fcd99160b8       5185b96f0becf                                                                                    58 seconds ago       Running             coredns                     0                   e3482c66aa743
	d42a44c5bc034       58a9a0c6d96f2                                                                                    About a minute ago   Running             kube-proxy                  0                   3fb6e6ae91df8
	2f3e869c727f9       bef2cf3115095                                                                                    About a minute ago   Running             kube-scheduler              0                   851acb1dbeb1e
	e109e6fe94bd5       1a54c86c03a67                                                                                    About a minute ago   Running             kube-controller-manager     0                   94e3e9b2b7bdf
	2087e58b9ebd8       a8a176a5d5d69                                                                                    About a minute ago   Running             etcd                        0                   567d18333cc46
	00d241b0787a8       4d2edfd10d3e3                                                                                    About a minute ago   Running             kube-apiserver              0                   817f79dd0b5b0
	
	* 
	* ==> coredns [930fcd99160b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220906154915-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220906154915-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=default-k8s-different-port-20220906154915-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_54_59_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:54:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220906154915-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:56:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:56:06 +0000   Tue, 06 Sep 2022 22:54:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:56:06 +0000   Tue, 06 Sep 2022 22:54:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:56:06 +0000   Tue, 06 Sep 2022 22:54:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 06 Sep 2022 22:56:06 +0000   Tue, 06 Sep 2022 22:56:06 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220906154915-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                9e7bcc06-4367-4f4b-bc76-5523d39b1adc
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-6g7xm                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     60s
	  kube-system                 etcd-default-k8s-different-port-20220906154915-22187                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220906154915-22187             250m (4%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220906154915-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-tmfkn                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220906154915-22187             100m (1%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 metrics-server-5c8fd5cf8-2pdjw                                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         58s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        dashboard-metrics-scraper-7b94984548-xqs4c                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-54596f475f-q5gxc                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  NodeHasSufficientPID     79s (x4 over 79s)  kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  79s (x4 over 79s)  kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientMemory
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    79s (x4 over 79s)  kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  73s                kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                73s                kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeReady
	  Normal  NodeHasSufficientPID     73s                kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    73s                kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           61s                node-controller  Node default-k8s-different-port-20220906154915-22187 event: Registered Node default-k8s-different-port-20220906154915-22187 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeNotReady             6s                 kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6s                 kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6s                 kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6s                 kubelet          Node default-k8s-different-port-20220906154915-22187 status is now: NodeHasSufficientPID
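	The Ready=False condition and not-ready taint above can be re-checked against the live node; a sketch, assuming kubectl is pointed at this cluster's context and using the node name from the output above:
		kubectl describe node default-k8s-different-port-20220906154915-22187
		kubectl get node default-k8s-different-port-20220906154915-22187 -o jsonpath='{.spec.taints}'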
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [2087e58b9ebd] <==
	* {"level":"info","ts":"2022-09-06T22:54:54.064Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:54:54.064Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:54:54.064Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:54:54.654Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220906154915-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:54:54.655Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:54:54.656Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:54:54.656Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-09-06T22:55:14.198Z","caller":"traceutil/trace.go:171","msg":"trace[1678117136] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"152.38418ms","start":"2022-09-06T22:55:14.046Z","end":"2022-09-06T22:55:14.198Z","steps":["trace[1678117136] 'process raft request'  (duration: 152.182575ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-06T22:55:14.198Z","caller":"traceutil/trace.go:171","msg":"trace[1387182222] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"153.319714ms","start":"2022-09-06T22:55:14.045Z","end":"2022-09-06T22:55:14.198Z","steps":["trace[1387182222] 'process raft request'  (duration: 78.965284ms)","trace[1387182222] 'compare'  (duration: 73.84346ms)"],"step_count":2}
	{"level":"warn","ts":"2022-09-06T22:55:20.177Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"210.321556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-09-06T22:55:20.177Z","caller":"traceutil/trace.go:171","msg":"trace[1498113217] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:508; }","duration":"210.417868ms","start":"2022-09-06T22:55:19.967Z","end":"2022-09-06T22:55:20.177Z","steps":["trace[1498113217] 'range keys from in-memory index tree'  (duration: 210.269251ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  22:56:13 up  1:12,  0 users,  load average: 1.34, 0.97, 1.00
	Linux default-k8s-different-port-20220906154915-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [00d241b0787a] <==
	* I0906 22:54:57.746199       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 22:54:57.746258       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 22:54:58.030080       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:54:58.054973       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:54:58.181586       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0906 22:54:58.185411       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0906 22:54:58.186157       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 22:54:58.188942       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 22:54:58.830587       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:54:59.443182       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:54:59.448747       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0906 22:54:59.455473       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:54:59.536259       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:55:12.216141       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0906 22:55:12.568104       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0906 22:55:14.315009       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.247.14]
	W0906 22:55:15.044009       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:55:15.044045       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0906 22:55:15.044051       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 22:55:15.044074       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:55:15.044103       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 22:55:15.044701       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.54.7]
	I0906 22:55:15.045059       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 22:55:15.101423       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.105.123.82]
	
	* 
	* ==> kube-controller-manager [e109e6fe94bd] <==
	* I0906 22:55:12.800074       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-6g7xm"
	I0906 22:55:12.920760       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-q4mb7"
	I0906 22:55:14.042818       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I0906 22:55:14.045382       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c8fd5cf8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0906 22:55:14.199803       1 replica_set.go:550] sync "kube-system/metrics-server-5c8fd5cf8" failed with pods "metrics-server-5c8fd5cf8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0906 22:55:14.205170       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-2pdjw"
	I0906 22:55:14.982465       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I0906 22:55:14.988147       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-54596f475f to 1"
	I0906 22:55:14.989815       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:55:14.992572       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:55:14.997128       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:55:14.997758       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:55:15.001180       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:55:15.001236       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:55:15.003651       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:55:15.003655       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 22:55:15.006151       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:55:15.006191       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 22:55:15.006173       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 22:55:15.006202       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 22:55:15.017438       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-54596f475f-q5gxc"
	I0906 22:55:15.031485       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-xqs4c"
	E0906 22:56:05.696861       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0906 22:56:05.759356       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0906 22:56:10.688515       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	
	* 
	* ==> kube-proxy [d42a44c5bc03] <==
	* I0906 22:55:13.324022       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:55:13.324104       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:55:13.324123       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:55:13.423105       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:55:13.423154       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:55:13.423161       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:55:13.423191       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:55:13.423227       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:55:13.423301       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:55:13.423440       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:55:13.423466       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:55:13.424767       1 config.go:317] "Starting service config controller"
	I0906 22:55:13.424784       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:55:13.424815       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:55:13.424821       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:55:13.424997       1 config.go:444] "Starting node config controller"
	I0906 22:55:13.425006       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:55:13.525532       1 shared_informer.go:262] Caches are synced for node config
	I0906 22:55:13.525586       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:55:13.525615       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [2f3e869c727f] <==
	* W0906 22:54:56.832394       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 22:54:56.832454       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 22:54:56.832465       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 22:54:56.832478       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 22:54:56.832395       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 22:54:56.832693       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 22:54:56.832704       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 22:54:56.832715       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 22:54:56.832789       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 22:54:56.832849       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 22:54:56.832917       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 22:54:56.832975       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 22:54:57.649360       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 22:54:57.649528       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 22:54:57.680727       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 22:54:57.680902       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 22:54:57.730731       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 22:54:57.730772       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0906 22:54:57.829528       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 22:54:57.829565       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:54:57.899679       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 22:54:57.899751       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 22:54:57.947161       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 22:54:57.947249       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0906 22:54:59.728656       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:50:24 UTC, end at Tue 2022-09-06 22:56:13 UTC. --
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162067   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6rz\" (UniqueName: \"kubernetes.io/projected/c9364049-c8f3-468a-867e-50133dcc208b-kube-api-access-5s6rz\") pod \"kube-proxy-tmfkn\" (UID: \"c9364049-c8f3-468a-867e-50133dcc208b\") " pod="kube-system/kube-proxy-tmfkn"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162102   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzb5g\" (UniqueName: \"kubernetes.io/projected/b88a6579-9359-435f-8fb4-b7ec5c7f7d52-kube-api-access-hzb5g\") pod \"metrics-server-5c8fd5cf8-2pdjw\" (UID: \"b88a6579-9359-435f-8fb4-b7ec5c7f7d52\") " pod="kube-system/metrics-server-5c8fd5cf8-2pdjw"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162137   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pgf8\" (UniqueName: \"kubernetes.io/projected/cd12e82d-279c-477c-82a6-77663bdacc76-kube-api-access-5pgf8\") pod \"coredns-565d847f94-6g7xm\" (UID: \"cd12e82d-279c-477c-82a6-77663bdacc76\") " pod="kube-system/coredns-565d847f94-6g7xm"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162155   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b88a6579-9359-435f-8fb4-b7ec5c7f7d52-tmp-dir\") pod \"metrics-server-5c8fd5cf8-2pdjw\" (UID: \"b88a6579-9359-435f-8fb4-b7ec5c7f7d52\") " pod="kube-system/metrics-server-5c8fd5cf8-2pdjw"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162209   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9364049-c8f3-468a-867e-50133dcc208b-kube-proxy\") pod \"kube-proxy-tmfkn\" (UID: \"c9364049-c8f3-468a-867e-50133dcc208b\") " pod="kube-system/kube-proxy-tmfkn"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162291   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8bs\" (UniqueName: \"kubernetes.io/projected/da22f144-e345-4b66-b770-500d22a98dfc-kube-api-access-ht8bs\") pod \"storage-provisioner\" (UID: \"da22f144-e345-4b66-b770-500d22a98dfc\") " pod="kube-system/storage-provisioner"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162332   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f06bbe71-2d02-408c-b415-734366bf4723-tmp-volume\") pod \"dashboard-metrics-scraper-7b94984548-xqs4c\" (UID: \"f06bbe71-2d02-408c-b415-734366bf4723\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-xqs4c"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162348   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9364049-c8f3-468a-867e-50133dcc208b-lib-modules\") pod \"kube-proxy-tmfkn\" (UID: \"c9364049-c8f3-468a-867e-50133dcc208b\") " pod="kube-system/kube-proxy-tmfkn"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162364   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5c52cb40-b9c3-4910-87ba-7c97614ca12e-tmp-volume\") pod \"kubernetes-dashboard-54596f475f-q5gxc\" (UID: \"5c52cb40-b9c3-4910-87ba-7c97614ca12e\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-q5gxc"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162456   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8grc\" (UniqueName: \"kubernetes.io/projected/5c52cb40-b9c3-4910-87ba-7c97614ca12e-kube-api-access-d8grc\") pod \"kubernetes-dashboard-54596f475f-q5gxc\" (UID: \"5c52cb40-b9c3-4910-87ba-7c97614ca12e\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-q5gxc"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162517   10978 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/da22f144-e345-4b66-b770-500d22a98dfc-tmp\") pod \"storage-provisioner\" (UID: \"da22f144-e345-4b66-b770-500d22a98dfc\") " pod="kube-system/storage-provisioner"
	Sep 06 22:56:07 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:07.162535   10978 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:08.342232   10978 request.go:601] Waited for 1.122758878s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:08.434183   10978 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220906154915-22187\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220906154915-22187"
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:08.576563   10978 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220906154915-22187\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220906154915-22187"
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:08.799715   10978 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220906154915-22187\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220906154915-22187"
	Sep 06 22:56:08 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:08.966375   10978 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220906154915-22187\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220906154915-22187"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:09.246710   10978 scope.go:115] "RemoveContainer" containerID="07e1a936211443892c9b484fefae3f75027257e3b2f7b9de565e37fe397ae1de"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:09.869470   10978 remote_image.go:222] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:09.869529   10978 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:09.869634   10978 kuberuntime_manager.go:862] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hzb5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c8fd5cf8-2pdjw_kube-system(b88a6579-9359-435f-8fb4-b7ec5c7f7d52): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Sep 06 22:56:09 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:09.869661   10978 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c8fd5cf8-2pdjw" podUID=b88a6579-9359-435f-8fb4-b7ec5c7f7d52
	Sep 06 22:56:10 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:10.242956   10978 scope.go:115] "RemoveContainer" containerID="07e1a936211443892c9b484fefae3f75027257e3b2f7b9de565e37fe397ae1de"
	Sep 06 22:56:10 default-k8s-different-port-20220906154915-22187 kubelet[10978]: I0906 22:56:10.243168   10978 scope.go:115] "RemoveContainer" containerID="0e1a39103c72d03baf7553906eb73a58617d26726101c478bbc42b9fe8158351"
	Sep 06 22:56:10 default-k8s-different-port-20220906154915-22187 kubelet[10978]: E0906 22:56:10.243316   10978 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7b94984548-xqs4c_kubernetes-dashboard(f06bbe71-2d02-408c-b415-734366bf4723)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-xqs4c" podUID=f06bbe71-2d02-408c-b415-734366bf4723
	
	* 
	* ==> kubernetes-dashboard [e3c507c5d7b1] <==
	* 2022/09/06 22:55:21 Starting overwatch
	2022/09/06 22:55:21 Using namespace: kubernetes-dashboard
	2022/09/06 22:55:21 Using in-cluster config to connect to apiserver
	2022/09/06 22:55:21 Using secret token for csrf signing
	2022/09/06 22:55:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/09/06 22:55:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/09/06 22:55:21 Successful initial request to the apiserver, version: v1.25.0
	2022/09/06 22:55:21 Generating JWE encryption key
	2022/09/06 22:55:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/09/06 22:55:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/09/06 22:55:21 Initializing JWE encryption key from synchronized object
	2022/09/06 22:55:21 Creating in-cluster Sidecar client
	2022/09/06 22:55:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 22:55:21 Serving insecurely on HTTP port: 9090
	2022/09/06 22:56:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [bf51ca1279c6] <==
	* I0906 22:55:14.818131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:55:14.826526       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:55:14.826708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:55:14.834051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:55:14.834099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ea33574-9df4-4cec-a23e-b315ede47166", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220906154915-22187_81c17666-4088-4160-a057-9d479a5092cf became leader
	I0906 22:55:14.834205       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220906154915-22187_81c17666-4088-4160-a057-9d479a5092cf!
	I0906 22:55:14.935344       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220906154915-22187_81c17666-4088-4160-a057-9d479a5092cf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220906154915-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c8fd5cf8-2pdjw
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220906154915-22187 describe pod metrics-server-5c8fd5cf8-2pdjw
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220906154915-22187 describe pod metrics-server-5c8fd5cf8-2pdjw: exit status 1 (56.88904ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-2pdjw" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220906154915-22187 describe pod metrics-server-5c8fd5cf8-2pdjw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (42.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0906 15:55:44.044112   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:56:24.328621   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:56:37.573728   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:57:41.130283   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:57:41.293752   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:57:47.108734   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 15:57:47.472494   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:58:00.625394   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:58:15.160736   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:58:37.723352   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:58:49.030332   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:59:04.177479   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:59:10.165037   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:59:45.091236   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 15:59:56.186164   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:59:56.982641   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 16:00:00.527443   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:00.532657   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:00.543067   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:00.563258   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:00.603749   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:00.685020   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:00.762642   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 16:00:00.845687   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:01.166301   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:01.808456   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:00:03.088422   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:05.649422   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:10.771135   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:12.110721   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:00:21.011058   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:00:41.490737   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:00:44.032017   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:01:08.135900   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:01:19.234643   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 16:01:22.451189   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:01:24.317592   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:01:37.561577   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:02:07.143043   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:02:41.117612   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 16:02:41.281097   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 16:02:44.372651   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
E0906 16:02:47.095981   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:02:47.364719   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 16:02:47.459340   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:03:00.031202   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:03:37.710568   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:03:49.017156   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:04:04.345682   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:04:56.181609   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 16:04:56.978507   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:05:00.524810   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 2 (429.75856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-20220906154143-22187" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220906154143-22187
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220906154143-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8",
	        "Created": "2022-09-06T22:41:49.616534464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 252066,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:47:29.039125207Z",
	            "FinishedAt": "2022-09-06T22:47:26.139154051Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hosts",
	        "LogPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8-json.log",
	        "Name": "/old-k8s-version-20220906154143-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220906154143-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220906154143-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220906154143-22187",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220906154143-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220906154143-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a2118a2c36e1b5c44aafe44f5808c04fdc08f7c9c97617d0abe3804e5920b4f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59556"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59557"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59558"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59559"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59560"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7a2118a2c36e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220906154143-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ccebcd496a2",
	                        "old-k8s-version-20220906154143-22187"
	                    ],
	                    "NetworkID": "3e22c4664759861d82314ff89c941b324eadf283ebb8fd6949e8fc4ad4c9a041",
	                    "EndpointID": "b81530b6afb4e1c30b7c1e1d7bbcce0431a21d5b730d06b677fa03cd39f407d8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
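The JSON above is the raw `docker container inspect` dump for the old-k8s-version node container. As a side note, a single field can be pulled out of that output with the same Go-template form the harness itself uses later in this log, instead of reading the full JSON; a minimal sketch against this run's profile name:

	# Sketch: extract the host port mapped to the container's SSH port (22/tcp);
	# per the Ports block above, this should print 59556 for this run.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-20220906154143-22187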
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 2 (416.295867ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
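`minikube status` encodes cluster component state in its exit code, which is why the harness tolerates the non-zero exit here ("may be ok"): the host container reports Running while some other component is presumably not in its expected state. A minimal sketch of the same probe (command and profile name taken verbatim from this run):

	# Sketch: repeat the harness's host-state check and surface the exit code.
	out/minikube-darwin-amd64 status --format={{.Host}} \
	  -p old-k8s-version-20220906154143-22187 \
	  -n old-k8s-version-20220906154143-22187
	echo "exit code: $?"   # 2 observed here despite Host=Running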
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 logs -n 25: (3.65646531s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220906155820-22187      | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | disable-driver-mounts-20220906155820-22187                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:04 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:04 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:59:30
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:59:30.262038   38636 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:59:30.262188   38636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:59:30.262193   38636 out.go:309] Setting ErrFile to fd 2...
	I0906 15:59:30.262197   38636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:59:30.262308   38636 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:59:30.262744   38636 out.go:303] Setting JSON to false
	I0906 15:59:30.277675   38636 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10741,"bootTime":1662494429,"procs":336,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:59:30.277782   38636 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:59:30.299234   38636 out.go:177] * [embed-certs-20220906155821-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:59:30.341461   38636 notify.go:193] Checking for updates...
	I0906 15:59:30.363080   38636 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:59:30.384168   38636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:59:30.405458   38636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:59:30.426996   38636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:59:30.448360   38636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:59:30.470635   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:59:30.471106   38636 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:59:30.539352   38636 docker.go:137] docker version: linux-20.10.17
	I0906 15:59:30.539462   38636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:59:30.670843   38636 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:59:30.614641007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:59:30.712577   38636 out.go:177] * Using the docker driver based on existing profile
	I0906 15:59:30.734837   38636 start.go:284] selected driver: docker
	I0906 15:59:30.734870   38636 start.go:808] validating driver "docker" against &{Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:30.735025   38636 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:59:30.738354   38636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:59:30.869658   38636 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:59:30.81424686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:59:30.869799   38636 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:59:30.869818   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:30.869829   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:30.869843   38636 start_flags.go:310] config:
	{Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:30.912149   38636 out.go:177] * Starting control plane node embed-certs-20220906155821-22187 in cluster embed-certs-20220906155821-22187
	I0906 15:59:30.933415   38636 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:59:30.954429   38636 out.go:177] * Pulling base image ...
	I0906 15:59:31.001627   38636 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:59:31.001689   38636 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:59:31.001724   38636 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:59:31.001744   38636 cache.go:57] Caching tarball of preloaded images
	I0906 15:59:31.001934   38636 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:59:31.001957   38636 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:59:31.002893   38636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/config.json ...
	I0906 15:59:31.066643   38636 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:59:31.066664   38636 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:59:31.066675   38636 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:59:31.066736   38636 start.go:364] acquiring machines lock for embed-certs-20220906155821-22187: {Name:mkf641e2928acfedb898f07b24fd168dccdc0551 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:59:31.066861   38636 start.go:368] acquired machines lock for "embed-certs-20220906155821-22187" in 104.801µs
	I0906 15:59:31.066880   38636 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:59:31.066891   38636 fix.go:55] fixHost starting: 
	I0906 15:59:31.067105   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 15:59:31.130023   38636 fix.go:103] recreateIfNeeded on embed-certs-20220906155821-22187: state=Stopped err=<nil>
	W0906 15:59:31.130050   38636 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:59:31.173435   38636 out.go:177] * Restarting existing docker container for "embed-certs-20220906155821-22187" ...
	I0906 15:59:31.194813   38636 cli_runner.go:164] Run: docker start embed-certs-20220906155821-22187
	I0906 15:59:31.539043   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 15:59:31.604033   38636 kic.go:415] container "embed-certs-20220906155821-22187" state is running.
	I0906 15:59:31.604697   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:31.675958   38636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/config.json ...
	I0906 15:59:31.676353   38636 machine.go:88] provisioning docker machine ...
	I0906 15:59:31.676379   38636 ubuntu.go:169] provisioning hostname "embed-certs-20220906155821-22187"
	I0906 15:59:31.676439   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:31.744270   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:31.744484   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:31.744500   38636 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220906155821-22187 && echo "embed-certs-20220906155821-22187" | sudo tee /etc/hostname
	I0906 15:59:31.866514   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220906155821-22187
	
	I0906 15:59:31.866600   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:31.931384   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:31.931532   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:31.931548   38636 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220906155821-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220906155821-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220906155821-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:59:32.043786   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:59:32.043809   38636 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:59:32.043831   38636 ubuntu.go:177] setting up certificates
	I0906 15:59:32.043843   38636 provision.go:83] configureAuth start
	I0906 15:59:32.043910   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:32.109953   38636 provision.go:138] copyHostCerts
	I0906 15:59:32.110077   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:59:32.110087   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:59:32.110175   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:59:32.110375   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:59:32.110389   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:59:32.110445   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:59:32.110625   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:59:32.110632   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:59:32.110688   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:59:32.110800   38636 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220906155821-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220906155821-22187]
	I0906 15:59:32.234910   38636 provision.go:172] copyRemoteCerts
	I0906 15:59:32.234973   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:59:32.235024   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.301797   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:32.384511   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:59:32.404630   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0906 15:59:32.423185   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:59:32.442534   38636 provision.go:86] duration metric: configureAuth took 398.671593ms
	I0906 15:59:32.442548   38636 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:59:32.442701   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:59:32.442763   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.508255   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.508405   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.508426   38636 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:59:32.623407   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:59:32.623421   38636 ubuntu.go:71] root file system type: overlay
	I0906 15:59:32.623580   38636 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:59:32.623645   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.688184   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.688365   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.688423   38636 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:59:32.811885   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:59:32.811975   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.875508   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.875661   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.875674   38636 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:59:32.994163   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:59:32.994185   38636 machine.go:91] provisioned docker machine in 1.317820355s
	I0906 15:59:32.994196   38636 start.go:300] post-start starting for "embed-certs-20220906155821-22187" (driver="docker")
	I0906 15:59:32.994202   38636 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:59:32.994271   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:59:32.994324   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.059474   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.140744   38636 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:59:33.144225   38636 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:59:33.144240   38636 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:59:33.144246   38636 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:59:33.144251   38636 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:59:33.144259   38636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:59:33.144377   38636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:59:33.144520   38636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:59:33.144661   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:59:33.151919   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:59:33.171420   38636 start.go:303] post-start completed in 177.213688ms
	I0906 15:59:33.171494   38636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:59:33.171543   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.236286   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.315015   38636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:59:33.319490   38636 fix.go:57] fixHost completed within 2.252593148s
	I0906 15:59:33.319503   38636 start.go:83] releasing machines lock for "embed-certs-20220906155821-22187", held for 2.252628285s
	I0906 15:59:33.319576   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:33.383050   38636 ssh_runner.go:195] Run: systemctl --version
	I0906 15:59:33.383109   38636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:59:33.383135   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.383168   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.450261   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.450290   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.581030   38636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:59:33.590993   38636 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:59:33.591044   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:59:33.602299   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:59:33.615635   38636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:59:33.686986   38636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:59:33.757095   38636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:59:33.825045   38636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:59:34.060910   38636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:59:34.126849   38636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:59:34.192180   38636 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:59:34.202955   38636 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:59:34.203017   38636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:59:34.206437   38636 start.go:471] Will wait 60s for crictl version
	I0906 15:59:34.206478   38636 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:59:34.302591   38636 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:59:34.302665   38636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:59:34.337107   38636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:59:34.413758   38636 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:59:34.413920   38636 cli_runner.go:164] Run: docker exec -t embed-certs-20220906155821-22187 dig +short host.docker.internal
	I0906 15:59:34.525925   38636 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:59:34.526040   38636 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:59:34.530030   38636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
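Two small techniques appear in the lines above: the host IP is discovered from inside the container by digging host.docker.internal, and /etc/hosts is then updated with a filter-then-append pattern (drop any stale host.minikube.internal line, append the fresh mapping to a temp file, sudo cp it back) so that repeated starts leave exactly one entry. A minimal sketch of generating that shell fragment; addHostAlias is an illustrative name, not minikube's API:

package main

import "fmt"

// addHostAlias builds the bash snippet seen in the log: grep -v strips any
// line ending in "<tab><name>" from /etc/hosts, echo appends "<ip><tab><name>",
// and the rewritten file is copied back with sudo.
func addHostAlias(ip, name string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
}

func main() {
	fmt.Println(addHostAlias("192.168.65.2", "host.minikube.internal"))
}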
	I0906 15:59:34.539714   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:34.603049   38636 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:59:34.603134   38636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:59:34.633537   38636 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:59:34.633555   38636 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:59:34.633621   38636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:59:34.664984   38636 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:59:34.665007   38636 cache_images.go:84] Images are preloaded, skipping loading
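The two docker images listings above drive the preload decision: when every expected image:tag for the target Kubernetes version is already present, tarball extraction (docker.go:542) and image loading (cache_images.go:84) are both skipped. A minimal sketch of that set comparison; the helper name and the trimmed image list are illustrative:

package main

import "fmt"

// preloaded reports whether every wanted image appears in the output of
// "docker images --format {{.Repository}}:{{.Tag}}", the condition under
// which the log above skips extraction and loading.
func preloaded(have, want []string) bool {
	set := make(map[string]bool, len(have))
	for _, img := range have {
		set[img] = true
	}
	for _, img := range want {
		if !set[img] {
			return false
		}
	}
	return true
}

func main() {
	have := []string{
		"registry.k8s.io/kube-apiserver:v1.25.0",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
	}
	fmt.Println(preloaded(have, []string{"registry.k8s.io/etcd:3.5.4-0"})) // true
}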
	I0906 15:59:34.665091   38636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:59:34.744509   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:34.744522   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:34.744536   38636 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:59:34.744551   38636 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220906155821-22187 NodeName:embed-certs-20220906155821-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:59:34.744685   38636 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220906155821-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
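
The kubeadm config above is generated rather than hand-written: the option struct logged at kubeadm.go:158 is rendered into this YAML, presumably via Go's text/template (the real template is not shown in this log). A minimal sketch of that render step over a fragment of the config; the opts struct and its field set are illustrative stand-ins:

package main

import (
	"os"
	"text/template"
)

// opts is a pared-down stand-in for the kubeadm option struct in the log;
// only the fields this fragment needs are included.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
	PodSubnet        string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	// Values taken from the kubeadm options line above.
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		NodeName:         "embed-certs-20220906155821-22187",
		CRISocket:        "/var/run/cri-dockerd.sock",
		PodSubnet:        "10.244.0.0/16",
	})
}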
	
	I0906 15:59:34.744775   38636 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220906155821-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
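One detail in the kubelet unit text above: the bare ExecStart= line is systemd's idiom for clearing the ExecStart inherited from the base kubelet.service before the drop-in sets the full command line; without that reset, systemd would reject a second ExecStart on a non-oneshot service. A minimal sketch of assembling such a drop-in; the flag list is trimmed and the helper name is illustrative:

package main

import (
	"fmt"
	"strings"
)

// kubeletDropIn builds a 10-kubeadm.conf-style override. The empty
// "ExecStart=" resets the base unit's command before the new one is set.
func kubeletDropIn(binary string, flags []string) string {
	return "[Unit]\nWants=docker.socket\n\n[Service]\nExecStart=\nExecStart=" +
		binary + " " + strings.Join(flags, " ") + "\n"
}

func main() {
	fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.25.0/kubelet", []string{
		"--container-runtime-endpoint=/var/run/cri-dockerd.sock",
		"--node-ip=192.168.76.2",
	}))
}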
	I0906 15:59:34.744831   38636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:59:34.752036   38636 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:59:34.752086   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:59:34.758799   38636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0906 15:59:34.770909   38636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:59:34.782836   38636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0906 15:59:34.795526   38636 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:59:34.799185   38636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:59:34.808319   38636 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187 for IP: 192.168.76.2
	I0906 15:59:34.808436   38636 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:59:34.808488   38636 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:59:34.808571   38636 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/client.key
	I0906 15:59:34.808633   38636 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.key.31bdca25
	I0906 15:59:34.808689   38636 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.key
	I0906 15:59:34.808881   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:59:34.808918   38636 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:59:34.808930   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:59:34.808969   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:59:34.809000   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:59:34.809031   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:59:34.809090   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:59:34.809639   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:59:34.826558   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:59:34.842729   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:59:34.859199   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:59:34.875553   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:59:34.892683   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:59:34.909267   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:59:34.925586   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:59:34.943279   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:59:34.960570   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:59:34.976829   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:59:34.993916   38636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:59:35.006394   38636 ssh_runner.go:195] Run: openssl version
	I0906 15:59:35.011296   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:59:35.019183   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.023061   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.023103   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.028251   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:59:35.035345   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:59:35.042841   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.046567   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.046608   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.051690   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:59:35.060553   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:59:35.068394   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.072508   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.072548   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.078010   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
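The openssl x509 -hash -noout calls above print each certificate's subject-name hash, which is the filename OpenSSL expects (as <hash>.0) when scanning /etc/ssl/certs, and the test -L || ln -fs keeps the symlink creation idempotent. A minimal sketch of the same two steps from Go via os/exec; installCACert is an illustrative name, and the sudo/bash wrapping mirrors the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert hashes a PEM cert with openssl and symlinks it into
// /etc/ssl/certs under "<subject-hash>.0", mirroring the commands above
// (e.g. b5213941.0 for minikubeCA.pem).
func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// test -L || ln -fs keeps the operation idempotent across restarts.
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}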
	I0906 15:59:35.085338   38636 kubeadm.go:396] StartCluster: {Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:35.085441   38636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:59:35.114198   38636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:59:35.121678   38636 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:59:35.121695   38636 kubeadm.go:627] restartCluster start
	I0906 15:59:35.121742   38636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:59:35.129021   38636 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.129082   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:35.193199   38636 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220906155821-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:59:35.193376   38636 kubeconfig.go:127] "embed-certs-20220906155821-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:59:35.193711   38636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:59:35.195111   38636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:59:35.203811   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.203867   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.212091   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.413063   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.413147   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.423469   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.613039   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.613124   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.622019   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.812186   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.812267   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.821025   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.013432   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.013565   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.023339   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.212268   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.212352   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.220885   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.412199   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.412282   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.421519   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.612305   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.612379   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.621617   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.812269   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.812442   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.821913   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.012008   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.012110   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.021439   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.212257   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.212414   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.221560   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.412154   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.412213   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.421151   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.611593   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.611679   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.620601   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.813302   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.813472   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.822723   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.013156   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:38.013257   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:38.023237   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.212440   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:38.212572   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:38.221850   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.221859   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:38.221904   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:38.229570   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.229582   38636 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:59:38.229589   38636 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:59:38.229646   38636 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:59:38.258980   38636 docker.go:443] Stopping containers: [3ace43e3cdd0 fa4259ac8ae1 b10f76b0afab a46bff16a884 ac542e62f7da b9ad41cd6945 a33bf934daea 4f7a134f0b21 dfdc5f92562f d4f62ccab8af 48e63018d570 b925f58f7247 8753c7e8e889 cd1efc2e1d99 94326a96dd97 b67711366c6d]
	I0906 15:59:38.259054   38636 ssh_runner.go:195] Run: docker stop 3ace43e3cdd0 fa4259ac8ae1 b10f76b0afab a46bff16a884 ac542e62f7da b9ad41cd6945 a33bf934daea 4f7a134f0b21 dfdc5f92562f d4f62ccab8af 48e63018d570 b925f58f7247 8753c7e8e889 cd1efc2e1d99 94326a96dd97 b67711366c6d
	I0906 15:59:38.288935   38636 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:59:38.298782   38636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:59:38.306417   38636 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 22:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Sep  6 22:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep  6 22:58 /etc/kubernetes/scheduler.conf
	
	I0906 15:59:38.306467   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:59:38.313578   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:59:38.320753   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:59:38.327712   38636 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.327753   38636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:59:38.334398   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:59:38.341325   38636 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.341375   38636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 15:59:38.349241   38636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:59:38.356713   38636 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:59:38.356727   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:38.408089   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.277607   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.401052   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.451457   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
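The restart path above validates each existing kubeconfig by grepping for the expected control-plane endpoint; files that no longer match (controller-manager.conf and scheduler.conf here) are removed, and targeted kubeadm init phases then regenerate only what is needed instead of re-running a full kubeadm init. A minimal sketch of the check-and-remove step, using plain os/exec in place of the log's ssh_runner; pruneStaleConf is an illustrative name:

package main

import (
	"fmt"
	"os/exec"
)

// pruneStaleConf removes a kubeconfig that no longer points at the expected
// control-plane endpoint so that "kubeadm init phase kubeconfig" will
// regenerate it; files that already match are left alone.
func pruneStaleConf(endpoint, path string) error {
	// grep exits non-zero when the endpoint is absent from the file.
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
		fmt.Printf("%q not found in %s - removing\n", endpoint, path)
		return exec.Command("sudo", "rm", "-f", path).Run()
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleConf(endpoint, f); err != nil {
			fmt.Println("prune failed:", err)
		}
	}
}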
	I0906 15:59:39.539398   38636 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:59:39.539455   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.047870   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.548175   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.608683   38636 api_server.go:71] duration metric: took 1.069984323s to wait for apiserver process to appear ...
	I0906 15:59:40.608708   38636 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:59:40.608729   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:40.609867   38636 api_server.go:256] stopped: https://127.0.0.1:60239/healthz: Get "https://127.0.0.1:60239/healthz": EOF
	I0906 15:59:41.110592   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:43.701073   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0906 15:59:43.701130   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0906 15:59:44.108296   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:44.115415   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:59:44.115431   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:59:44.608093   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:44.613832   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:59:44.613847   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:59:45.107569   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:45.113794   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 200:
	ok
	I0906 15:59:45.120558   38636 api_server.go:140] control plane version: v1.25.0
	I0906 15:59:45.120569   38636 api_server.go:130] duration metric: took 4.51431829s to wait for apiserver health ...
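The healthz probe above treats transient failures as "not ready yet": an early EOF while the apiserver socket comes up, a 403 while the RBAC bootstrap roles are still being created, and 500s while poststarthooks finish, stopping only on a 200. A minimal sketch of that polling loop; TLS verification is skipped because the apiserver certificate is minikube-signed, and the port is the one from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes; EOFs, 403s, and 500s are retried, matching the
// sequence in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			// The apiserver cert is not signed by a system CA, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:60239/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}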
	I0906 15:59:45.120576   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:45.120585   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:45.120601   38636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:59:45.128405   38636 system_pods.go:59] 8 kube-system pods found
	I0906 15:59:45.128423   38636 system_pods.go:61] "coredns-565d847f94-5frt9" [0228f046-b179-4812-a7e5-c83cecc89e27] Running
	I0906 15:59:45.128429   38636 system_pods.go:61] "etcd-embed-certs-20220906155821-22187" [c2de4fd6-a0ae-4f47-85de-74bcc70bdb2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:59:45.128433   38636 system_pods.go:61] "kube-apiserver-embed-certs-20220906155821-22187" [0d53a9a2-f2dc-45fa-bce1-519c55da2cc4] Running
	I0906 15:59:45.128438   38636 system_pods.go:61] "kube-controller-manager-embed-certs-20220906155821-22187" [7cbb7baa-b9f1-4603-a7b9-8048df17b8dd] Running
	I0906 15:59:45.128443   38636 system_pods.go:61] "kube-proxy-zss4k" [f1dfb3a5-6fa4-48cf-95fa-0132b1ec5c8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 15:59:45.128448   38636 system_pods.go:61] "kube-scheduler-embed-certs-20220906155821-22187" [f8ba94d8-2b42-4733-b705-bc6af0b91d1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:59:45.128453   38636 system_pods.go:61] "metrics-server-5c8fd5cf8-cdg6d" [65746fe5-91aa-47c8-a8b4-d4a67f749ab8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:59:45.128456   38636 system_pods.go:61] "storage-provisioner" [13ae32f7-198b-4787-8687-aa39b2729274] Running
	I0906 15:59:45.128460   38636 system_pods.go:74] duration metric: took 7.85832ms to wait for pod list to return data ...
	I0906 15:59:45.128467   38636 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:59:45.131418   38636 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:59:45.131433   38636 node_conditions.go:123] node cpu capacity is 6
	I0906 15:59:45.131442   38636 node_conditions.go:105] duration metric: took 2.974231ms to run NodePressure ...
	I0906 15:59:45.131454   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:45.310869   38636 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:59:45.315021   38636 kubeadm.go:778] kubelet initialised
	I0906 15:59:45.315032   38636 kubeadm.go:779] duration metric: took 4.153612ms waiting for restarted kubelet to initialise ...
	I0906 15:59:45.315041   38636 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:59:45.320463   38636 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-5frt9" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:45.326126   38636 pod_ready.go:92] pod "coredns-565d847f94-5frt9" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:45.326135   38636 pod_ready.go:81] duration metric: took 5.66283ms waiting for pod "coredns-565d847f94-5frt9" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:45.326141   38636 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:47.335090   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:49.334484   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:51.337017   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:52.335838   38636 pod_ready.go:92] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:52.335849   38636 pod_ready.go:81] duration metric: took 7.012332045s waiting for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.335855   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.339996   38636 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:52.340004   38636 pod_ready.go:81] duration metric: took 4.146291ms waiting for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.340010   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:54.351029   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:56.848497   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:58.850674   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:59.347750   38636 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.347764   38636 pod_ready.go:81] duration metric: took 7.009427345s waiting for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.347771   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zss4k" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.351913   38636 pod_ready.go:92] pod "kube-proxy-zss4k" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.351921   38636 pod_ready.go:81] duration metric: took 4.135355ms waiting for pod "kube-proxy-zss4k" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.351927   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.356071   38636 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.356080   38636 pod_ready.go:81] duration metric: took 4.1483ms waiting for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.356087   38636 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" ...
	I0906 16:00:01.365786   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:03.365913   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:05.864397   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:07.865924   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:10.365936   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:12.864158   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:14.864836   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:16.865572   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:19.366603   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:21.863612   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:23.865028   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:26.363858   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:28.364294   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:30.366125   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:32.865447   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:35.362385   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:37.364530   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:39.863069   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:41.864919   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:44.363145   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:46.366591   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:48.863143   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:50.866878   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:53.363754   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:55.364778   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:57.862437   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:00:59.863334   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:02.363223   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:04.864534   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:07.363948   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:09.862744   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:11.864192   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:14.364619   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:16.365257   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:18.864438   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:21.362761   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:23.364003   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:01:25.365931   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	[... identical "Ready":"False" checks repeated roughly every 2.5s from 16:01:25 to 16:03:58; 65 similar lines elided ...]
	I0906 16:03:58.362706   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:03:59.356938   38636 pod_ready.go:81] duration metric: took 4m0.004474184s waiting for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" ...
	E0906 16:03:59.356974   38636 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 16:03:59.356999   38636 pod_ready.go:38] duration metric: took 4m14.04989418s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:03:59.357025   38636 kubeadm.go:631] restartCluster took 4m24.248696346s
	W0906 16:03:59.357127   38636 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0906 16:03:59.357149   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 16:04:03.698932   38636 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.341781129s)
	I0906 16:04:03.698999   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:03.708822   38636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 16:04:03.716300   38636 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 16:04:03.716346   38636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 16:04:03.724386   38636 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 16:04:03.724421   38636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 16:04:03.767530   38636 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
	I0906 16:04:03.767567   38636 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 16:04:03.863194   38636 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 16:04:03.863313   38636 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 16:04:03.863392   38636 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 16:04:03.985091   38636 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 16:04:04.009873   38636 out.go:204]   - Generating certificates and keys ...
	I0906 16:04:04.009938   38636 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 16:04:04.010013   38636 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 16:04:04.010092   38636 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 16:04:04.010151   38636 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 16:04:04.010224   38636 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 16:04:04.010326   38636 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 16:04:04.010382   38636 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 16:04:04.010428   38636 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 16:04:04.010506   38636 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 16:04:04.010568   38636 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 16:04:04.010599   38636 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 16:04:04.010644   38636 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 16:04:04.112141   38636 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 16:04:04.428252   38636 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 16:04:04.781321   38636 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 16:04:04.891466   38636 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 16:04:04.902953   38636 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 16:04:04.903733   38636 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 16:04:04.903840   38636 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 16:04:04.989147   38636 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 16:04:05.010782   38636 out.go:204]   - Booting up control plane ...
	I0906 16:04:05.010866   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 16:04:05.010943   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 16:04:05.011017   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 16:04:05.011077   38636 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 16:04:05.011220   38636 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 16:04:10.494832   38636 kubeadm.go:317] [apiclient] All control plane components are healthy after 5.503264 seconds
	I0906 16:04:10.494909   38636 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 16:04:10.501767   38636 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 16:04:11.013788   38636 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 16:04:11.013935   38636 kubeadm.go:317] [mark-control-plane] Marking the node embed-certs-20220906155821-22187 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 16:04:11.519763   38636 kubeadm.go:317] [bootstrap-token] Using token: fqw8zb.b3unh498onihp969
	I0906 16:04:11.556084   38636 out.go:204]   - Configuring RBAC rules ...
	I0906 16:04:11.556186   38636 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 16:04:11.556258   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 16:04:11.595414   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 16:04:11.597593   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 16:04:11.600071   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 16:04:11.602066   38636 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 16:04:11.608914   38636 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 16:04:11.744220   38636 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0906 16:04:11.927532   38636 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0906 16:04:11.936157   38636 kubeadm.go:317] 
	I0906 16:04:11.936239   38636 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0906 16:04:11.936251   38636 kubeadm.go:317] 
	I0906 16:04:11.936347   38636 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0906 16:04:11.936360   38636 kubeadm.go:317] 
	I0906 16:04:11.936397   38636 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0906 16:04:11.936483   38636 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 16:04:11.936535   38636 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 16:04:11.936545   38636 kubeadm.go:317] 
	I0906 16:04:11.936592   38636 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0906 16:04:11.936601   38636 kubeadm.go:317] 
	I0906 16:04:11.936648   38636 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 16:04:11.936660   38636 kubeadm.go:317] 
	I0906 16:04:11.936721   38636 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0906 16:04:11.936790   38636 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 16:04:11.936860   38636 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 16:04:11.936870   38636 kubeadm.go:317] 
	I0906 16:04:11.936973   38636 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 16:04:11.937041   38636 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0906 16:04:11.937049   38636 kubeadm.go:317] 
	I0906 16:04:11.937130   38636 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token fqw8zb.b3unh498onihp969 \
	I0906 16:04:11.937205   38636 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
	I0906 16:04:11.937225   38636 kubeadm.go:317] 	--control-plane 
	I0906 16:04:11.937230   38636 kubeadm.go:317] 
	I0906 16:04:11.937297   38636 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0906 16:04:11.937303   38636 kubeadm.go:317] 
	I0906 16:04:11.937368   38636 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token fqw8zb.b3unh498onihp969 \
	I0906 16:04:11.937490   38636 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
	I0906 16:04:11.940643   38636 kubeadm.go:317] W0906 23:04:03.783659    7834 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 16:04:11.940759   38636 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 16:04:11.940841   38636 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 16:04:11.940910   38636 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 16:04:11.940926   38636 cni.go:95] Creating CNI manager for ""
	I0906 16:04:11.940937   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:04:11.940954   38636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 16:04:11.941016   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:11.941027   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4 minikube.k8s.io/name=embed-certs-20220906155821-22187 minikube.k8s.io/updated_at=2022_09_06T16_04_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:12.053740   38636 ops.go:34] apiserver oom_adj: -16
	I0906 16:04:12.053787   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same "get sa default" retry repeated roughly every 0.5s from 16:04:12 to 16:04:24; 24 similar lines elided ...]
	I0906 16:04:24.629870   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:24.693469   38636 kubeadm.go:1046] duration metric: took 12.752546325s to wait for elevateKubeSystemPrivileges.
	I0906 16:04:24.693487   38636 kubeadm.go:398] StartCluster complete in 4m49.621602402s
	I0906 16:04:24.693510   38636 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:04:24.693618   38636 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 16:04:24.694416   38636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:04:25.209438   38636 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220906155821-22187" rescaled to 1
	I0906 16:04:25.209475   38636 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:04:25.209488   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 16:04:25.209543   38636 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 16:04:25.248550   38636 out.go:177] * Verifying Kubernetes components...
	I0906 16:04:25.209701   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 16:04:25.248613   38636 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248614   38636 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248617   38636 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248621   38636 addons.go:65] Setting dashboard=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.274065   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 16:04:25.323012   38636 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323027   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:25.323031   38636 addons.go:153] Setting addon dashboard=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323035   38636 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323041   38636 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220906155821-22187"
	W0906 16:04:25.349810   38636 addons.go:162] addon storage-provisioner should already be in state true
	W0906 16:04:25.349817   38636 addons.go:162] addon metrics-server should already be in state true
	W0906 16:04:25.349808   38636 addons.go:162] addon dashboard should already be in state true
	I0906 16:04:25.349908   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.349908   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.350008   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.350278   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351712   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351778   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351905   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.372800   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.479636   38636 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 16:04:25.537415   38636 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0906 16:04:25.500699   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 16:04:25.537466   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 16:04:25.579923   38636 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:04:25.616492   38636 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 16:04:25.580057   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.618390   38636 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.675937   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 16:04:25.634198   38636 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220906155821-22187" to be "Ready" ...
	I0906 16:04:25.675960   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 16:04:25.654052   38636 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:04:25.676027   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0906 16:04:25.675946   38636 addons.go:162] addon default-storageclass should already be in state true
	I0906 16:04:25.676093   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.676134   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.676180   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.680582   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.694583   38636 node_ready.go:49] node "embed-certs-20220906155821-22187" has status "Ready":"True"
	I0906 16:04:25.694606   38636 node_ready.go:38] duration metric: took 18.642476ms waiting for node "embed-certs-20220906155821-22187" to be "Ready" ...
	I0906 16:04:25.694617   38636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:04:25.703428   38636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-7hgsh" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:25.769082   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.770815   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.771641   38636 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 16:04:25.771655   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 16:04:25.771721   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.771828   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.846515   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.908743   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 16:04:25.908759   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 16:04:25.923614   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:04:26.012628   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 16:04:26.012643   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 16:04:26.093532   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 16:04:26.093544   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 16:04:26.107106   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 16:04:26.111721   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 16:04:26.111737   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 16:04:26.197860   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 16:04:26.197879   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 16:04:26.222994   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 16:04:26.223005   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 16:04:26.290198   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 16:04:26.290219   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 16:04:26.306943   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 16:04:26.306956   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 16:04:26.389305   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 16:04:26.404625   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 16:04:26.404642   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 16:04:26.502869   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 16:04:26.502883   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 16:04:26.586788   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 16:04:26.586801   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 16:04:26.602971   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 16:04:26.602986   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 16:04:26.687833   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 16:04:26.989360   38636 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.639629341s)
	I0906 16:04:26.989402   38636 start.go:810] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0906 16:04:27.019123   38636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095487172s)
	I0906 16:04:27.105458   38636 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220906155821-22187"
	I0906 16:04:27.721184   38636 pod_ready.go:92] pod "coredns-565d847f94-7hgsh" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:27.721200   38636 pod_ready.go:81] duration metric: took 2.017760025s waiting for pod "coredns-565d847f94-7hgsh" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:27.721212   38636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-hwccr" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:27.884983   38636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.197113945s)
	I0906 16:04:27.919906   38636 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0906 16:04:27.956698   38636 addons.go:414] enableAddons completed in 2.747190456s
	I0906 16:04:29.734002   38636 pod_ready.go:102] pod "coredns-565d847f94-hwccr" in "kube-system" namespace has status "Ready":"False"
	I0906 16:04:30.232781   38636 pod_ready.go:92] pod "coredns-565d847f94-hwccr" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.232795   38636 pod_ready.go:81] duration metric: took 2.511583495s waiting for pod "coredns-565d847f94-hwccr" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.232802   38636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.241018   38636 pod_ready.go:92] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.241028   38636 pod_ready.go:81] duration metric: took 8.220934ms waiting for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.241036   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.246347   38636 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.246358   38636 pod_ready.go:81] duration metric: took 5.317921ms waiting for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.246365   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.251178   38636 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.271910   38636 pod_ready.go:81] duration metric: took 25.535498ms waiting for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.271928   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k97f9" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.278165   38636 pod_ready.go:92] pod "kube-proxy-k97f9" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.278179   38636 pod_ready.go:81] duration metric: took 6.242796ms waiting for pod "kube-proxy-k97f9" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.278197   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.630702   38636 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.630713   38636 pod_ready.go:81] duration metric: took 352.505269ms waiting for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.630719   38636 pod_ready.go:38] duration metric: took 4.93610349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:04:30.630735   38636 api_server.go:51] waiting for apiserver process to appear ...
	I0906 16:04:30.630784   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 16:04:30.645666   38636 api_server.go:71] duration metric: took 5.436188155s to wait for apiserver process to appear ...
	I0906 16:04:30.645679   38636 api_server.go:87] waiting for apiserver healthz status ...
	I0906 16:04:30.645686   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 16:04:30.651159   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 200:
	ok
	I0906 16:04:30.652511   38636 api_server.go:140] control plane version: v1.25.0
	I0906 16:04:30.652524   38636 api_server.go:130] duration metric: took 6.840548ms to wait for apiserver health ...
	I0906 16:04:30.652530   38636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 16:04:30.833833   38636 system_pods.go:59] 9 kube-system pods found
	I0906 16:04:30.833849   38636 system_pods.go:61] "coredns-565d847f94-7hgsh" [94873873-9734-4e1f-8114-f59e04819eec] Running
	I0906 16:04:30.833853   38636 system_pods.go:61] "coredns-565d847f94-hwccr" [14797c46-59df-423f-9376-8faa955f2426] Running
	I0906 16:04:30.833859   38636 system_pods.go:61] "etcd-embed-certs-20220906155821-22187" [eaf284d5-7ece-438d-bf12-b222518876cf] Running
	I0906 16:04:30.833862   38636 system_pods.go:61] "kube-apiserver-embed-certs-20220906155821-22187" [bf038e93-a5ca-48e4-af4c-8d906a875d3a] Running
	I0906 16:04:30.833867   38636 system_pods.go:61] "kube-controller-manager-embed-certs-20220906155821-22187" [a46c5bff-a2cf-4305-8fdd-37c601cb2e63] Running
	I0906 16:04:30.833872   38636 system_pods.go:61] "kube-proxy-k97f9" [36966060-5270-424c-a005-81413d70656a] Running
	I0906 16:04:30.833878   38636 system_pods.go:61] "kube-scheduler-embed-certs-20220906155821-22187" [164df980-70d4-464b-a513-b5174ff3b963] Running
	I0906 16:04:30.833885   38636 system_pods.go:61] "metrics-server-5c8fd5cf8-xq9zv" [73f275fe-7d42-400b-ad93-df387c9ed53d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 16:04:30.833893   38636 system_pods.go:61] "storage-provisioner" [1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd] Running
	I0906 16:04:30.833900   38636 system_pods.go:74] duration metric: took 181.366286ms to wait for pod list to return data ...
	I0906 16:04:30.833906   38636 default_sa.go:34] waiting for default service account to be created ...
	I0906 16:04:31.030564   38636 default_sa.go:45] found service account: "default"
	I0906 16:04:31.030579   38636 default_sa.go:55] duration metric: took 196.655364ms for default service account to be created ...
	I0906 16:04:31.030585   38636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 16:04:31.234390   38636 system_pods.go:86] 9 kube-system pods found
	I0906 16:04:31.234405   38636 system_pods.go:89] "coredns-565d847f94-7hgsh" [94873873-9734-4e1f-8114-f59e04819eec] Running
	I0906 16:04:31.234410   38636 system_pods.go:89] "coredns-565d847f94-hwccr" [14797c46-59df-423f-9376-8faa955f2426] Running
	I0906 16:04:31.234413   38636 system_pods.go:89] "etcd-embed-certs-20220906155821-22187" [eaf284d5-7ece-438d-bf12-b222518876cf] Running
	I0906 16:04:31.234417   38636 system_pods.go:89] "kube-apiserver-embed-certs-20220906155821-22187" [bf038e93-a5ca-48e4-af4c-8d906a875d3a] Running
	I0906 16:04:31.234427   38636 system_pods.go:89] "kube-controller-manager-embed-certs-20220906155821-22187" [a46c5bff-a2cf-4305-8fdd-37c601cb2e63] Running
	I0906 16:04:31.234434   38636 system_pods.go:89] "kube-proxy-k97f9" [36966060-5270-424c-a005-81413d70656a] Running
	I0906 16:04:31.234438   38636 system_pods.go:89] "kube-scheduler-embed-certs-20220906155821-22187" [164df980-70d4-464b-a513-b5174ff3b963] Running
	I0906 16:04:31.234445   38636 system_pods.go:89] "metrics-server-5c8fd5cf8-xq9zv" [73f275fe-7d42-400b-ad93-df387c9ed53d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 16:04:31.234449   38636 system_pods.go:89] "storage-provisioner" [1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd] Running
	I0906 16:04:31.234455   38636 system_pods.go:126] duration metric: took 203.86794ms to wait for k8s-apps to be running ...
	I0906 16:04:31.234461   38636 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 16:04:31.234511   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:31.244461   38636 system_svc.go:56] duration metric: took 9.993449ms WaitForService to wait for kubelet.
	I0906 16:04:31.244474   38636 kubeadm.go:573] duration metric: took 6.035000594s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 16:04:31.244487   38636 node_conditions.go:102] verifying NodePressure condition ...
	I0906 16:04:31.430989   38636 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 16:04:31.431001   38636 node_conditions.go:123] node cpu capacity is 6
	I0906 16:04:31.431008   38636 node_conditions.go:105] duration metric: took 186.51865ms to run NodePressure ...
	I0906 16:04:31.431017   38636 start.go:216] waiting for startup goroutines ...
	I0906 16:04:31.467536   38636 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 16:04:31.509529   38636 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220906155821-22187" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:47:29 UTC, end at Tue 2022-09-06 23:05:10 UTC. --
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[131]: time="2022-09-06T22:47:31.528204599Z" level=info msg="Processing signal 'terminated'"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[131]: time="2022-09-06T22:47:31.529151410Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[131]: time="2022-09-06T22:47:31.529777222Z" level=info msg="Daemon shutdown complete"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: docker.service: Succeeded.
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.588828648Z" level=info msg="Starting up"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590571788Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590605888Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590631004Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590641853Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591550398Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591603148Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591645967Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591685874Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.595222522Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.599079518Z" level=info msg="Loading containers: start."
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.676228835Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.708132289Z" level=info msg="Loading containers: done."
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.716192633Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.716331649Z" level=info msg="Daemon has completed initialization"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.738785771Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.741578122Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-09-06T23:05:12Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:05:13 up  1:21,  0 users,  load average: 1.50, 1.10, 1.05
	Linux old-k8s-version-20220906154143-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:47:29 UTC, end at Tue 2022-09-06 23:05:13 UTC. --
	Sep 06 23:05:11 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 kubelet[24463]: I0906 23:05:12.679196   24463 server.go:410] Version: v1.16.0
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 kubelet[24463]: I0906 23:05:12.680050   24463 plugins.go:100] No cloud provider specified.
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 kubelet[24463]: I0906 23:05:12.680064   24463 server.go:773] Client rotation is on, will bootstrap in background
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 kubelet[24463]: I0906 23:05:12.681698   24463 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 kubelet[24463]: W0906 23:05:12.682359   24463 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 kubelet[24463]: W0906 23:05:12.682431   24463 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 kubelet[24463]: F0906 23:05:12.682453   24463 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 06 23:05:12 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 930.
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 kubelet[24498]: I0906 23:05:13.428300   24498 server.go:410] Version: v1.16.0
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 kubelet[24498]: I0906 23:05:13.428544   24498 plugins.go:100] No cloud provider specified.
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 kubelet[24498]: I0906 23:05:13.428556   24498 server.go:773] Client rotation is on, will bootstrap in background
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 kubelet[24498]: I0906 23:05:13.430336   24498 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 kubelet[24498]: W0906 23:05:13.430986   24498 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 kubelet[24498]: W0906 23:05:13.431064   24498 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 kubelet[24498]: F0906 23:05:13.431089   24498 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 06 23:05:13 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0906 16:05:13.161594   39180 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
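
Note: the kubelet crash loop captured above ends every attempt with "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 930), i.e. kubelet v1.16's cgroup-v1 lookup cannot find a cgroup filesystem with the cpu controller mounted inside the node container. As a rough sketch of that kind of lookup (illustrative Go, not kubelet's actual code), the check amounts to scanning /proc/mounts for a cgroup mount whose options include "cpu":

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// cpuCgroupMountpoint scans /proc/mounts for a cgroup-v1 filesystem
	// mounted with the cpu subsystem and returns its mountpoint.
	func cpuCgroupMountpoint() (string, error) {
		f, err := os.Open("/proc/mounts")
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// /proc/mounts fields: device mountpoint fstype options dump pass
			fields := strings.Fields(sc.Text())
			if len(fields) < 4 || fields[2] != "cgroup" {
				continue
			}
			for _, opt := range strings.Split(fields[3], ",") {
				if opt == "cpu" {
					return fields[1], nil // e.g. /sys/fs/cgroup/cpu,cpuacct
				}
			}
		}
		return "", fmt.Errorf("mountpoint for cpu not found")
	}

	func main() {
		mp, err := cpuCgroupMountpoint()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(mp)
	}

The kicbase containers in this run are created with a private cgroup namespace (see "CgroupnsMode": "private" in the docker inspect output further down), so one plausible cause is that no v1 cpu controller mount is visible to the old kubelet, which would explain the endless systemd restart loop.
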
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 2 (419.343911ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220906154143-22187" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.11s)
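
For context on the status checks above: minikube's --format flag is a Go text/template rendered against the profile's status, which is why {{.APIServer}} prints the bare "Stopped". A self-contained sketch of that mechanism (the Status struct below is illustrative, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the struct minikube renders --format against;
	// the field names mirror the templates used throughout this report.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			os.Exit(1)
		}
	}
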

TestStartStop/group/newest-cni/serial/Pause (46.84s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220906155618-22187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187: exit status 2 (16.07835014s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187: exit status 2 (16.076338323s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220906155618-22187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187
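
The serial/Pause sequence above is: pause, assert {{.APIServer}} and {{.Kubelet}}, unpause, assert again; the failure is that the post-pause apiserver status reads "Stopped" where "Paused" was expected. A hedged sketch of the polling shape such a check can take (the helper name and timeout are illustrative, not the test's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForStatus polls `minikube status --format` until the named field
	// reports the wanted value or the timeout elapses.
	func waitForStatus(profile, field, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// status exits non-zero for non-Running states (the "exit status 2
			// (may be ok)" seen above), so the error is ignored and only the
			// rendered output is compared.
			out, _ := exec.Command("out/minikube-darwin-amd64", "status",
				"--format", "{{."+field+"}}", "-p", profile, "-n", profile).Output()
			if strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("%s: %s never reached %q", profile, field, want)
	}

	func main() {
		err := waitForStatus("newest-cni-20220906155618-22187", "APIServer", "Paused", 30*time.Second)
		if err != nil {
			fmt.Println(err)
		}
	}
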
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220906155618-22187
helpers_test.go:235: (dbg) docker inspect newest-cni-20220906155618-22187:

-- stdout --
	[
	    {
	        "Id": "13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5",
	        "Created": "2022-09-06T22:56:24.751095531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296379,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:57:13.859707329Z",
	            "FinishedAt": "2022-09-06T22:57:11.950390798Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5/hostname",
	        "HostsPath": "/var/lib/docker/containers/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5/hosts",
	        "LogPath": "/var/lib/docker/containers/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5-json.log",
	        "Name": "/newest-cni-20220906155618-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220906155618-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220906155618-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/02f6b321fc29f63d513d9f2c918841e5b758c9405eabe4647f5f5e017467f08a-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/docker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908ed5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02f6b321fc29f63d513d9f2c918841e5b758c9405eabe4647f5f5e017467f08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02f6b321fc29f63d513d9f2c918841e5b758c9405eabe4647f5f5e017467f08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02f6b321fc29f63d513d9f2c918841e5b758c9405eabe4647f5f5e017467f08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220906155618-22187",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220906155618-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220906155618-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220906155618-22187",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220906155618-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c33081d9f576e40fd633fa401d30cbcdbbe1dab5fcb0fb5797bba48220266681",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59964"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59965"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59966"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59967"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59968"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c33081d9f576",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220906155618-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "13042027b96b",
	                        "newest-cni-20220906155618-22187"
	                    ],
	                    "NetworkID": "283a9c52fde1270ef4c155872d705fb55e9f549a9a923e6a0c14e83559ebb8e6",
	                    "EndpointID": "c3b4704b7fbbf593cd8bacf42b3310256d317dfdecea4cfa83411f1ba7ce0d5e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
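
The State block in the JSON above ("Status": "running", "Paused": false) shows the container was not in Docker's paused state when the post-mortem ran. When only those fields matter, a targeted docker inspect template avoids dumping the whole document; a small sketch (illustrative, not what helpers_test.go runs):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Pull just the container state fields the Pause test cares about.
		out, err := exec.Command("docker", "inspect",
			"-f", "{{.State.Status}} paused={{.State.Paused}}",
			"newest-cni-20220906155618-22187").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. "running paused=false"
	}
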
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220906155618-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220906155618-22187 logs -n 25: (4.433467617s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187                       |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT |                     |
	|         | old-k8s-version-20220906154143-22187                       |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:57:12
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:57:12.639145   38061 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:57:12.639316   38061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:57:12.639321   38061 out.go:309] Setting ErrFile to fd 2...
	I0906 15:57:12.639324   38061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:57:12.639414   38061 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:57:12.639877   38061 out.go:303] Setting JSON to false
	I0906 15:57:12.654688   38061 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10603,"bootTime":1662494429,"procs":334,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:57:12.654777   38061 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:57:12.676421   38061 out.go:177] * [newest-cni-20220906155618-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:57:12.698442   38061 notify.go:193] Checking for updates...
	I0906 15:57:12.720426   38061 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:57:12.742512   38061 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:57:12.764488   38061 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:57:12.788008   38061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:57:12.811102   38061 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:57:12.832908   38061 config.go:180] Loaded profile config "newest-cni-20220906155618-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:57:12.833426   38061 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:57:12.901134   38061 docker.go:137] docker version: linux-20.10.17
	I0906 15:57:12.901284   38061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:57:13.032454   38061 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:57:12.96388528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:57:13.075840   38061 out.go:177] * Using the docker driver based on existing profile
	I0906 15:57:13.097333   38061 start.go:284] selected driver: docker
	I0906 15:57:13.097357   38061 start.go:808] validating driver "docker" against &{Name:newest-cni-20220906155618-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:newest-cni-20220906155618-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:57:13.097525   38061 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:57:13.100279   38061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:57:13.229790   38061 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:57:13.162574986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:57:13.229955   38061 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 15:57:13.229971   38061 cni.go:95] Creating CNI manager for ""
	I0906 15:57:13.229982   38061 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:57:13.229994   38061 start_flags.go:310] config:
	{Name:newest-cni-20220906155618-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:newest-cni-20220906155618-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:57:13.251979   38061 out.go:177] * Starting control plane node newest-cni-20220906155618-22187 in cluster newest-cni-20220906155618-22187
	I0906 15:57:13.273520   38061 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:57:13.294552   38061 out.go:177] * Pulling base image ...
	I0906 15:57:13.336322   38061 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:57:13.336368   38061 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:57:13.336383   38061 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:57:13.336395   38061 cache.go:57] Caching tarball of preloaded images
	I0906 15:57:13.336500   38061 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:57:13.336510   38061 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:57:13.336988   38061 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/config.json ...
	I0906 15:57:13.398266   38061 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:57:13.398281   38061 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:57:13.398290   38061 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:57:13.398331   38061 start.go:364] acquiring machines lock for newest-cni-20220906155618-22187: {Name:mk401549b6b19b3ef0eb6b86c2aa909990058f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:57:13.398408   38061 start.go:368] acquired machines lock for "newest-cni-20220906155618-22187" in 56.739µs
	I0906 15:57:13.398427   38061 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:57:13.398437   38061 fix.go:55] fixHost starting: 
	I0906 15:57:13.398729   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:13.461752   38061 fix.go:103] recreateIfNeeded on newest-cni-20220906155618-22187: state=Stopped err=<nil>
	W0906 15:57:13.461778   38061 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:57:13.505619   38061 out.go:177] * Restarting existing docker container for "newest-cni-20220906155618-22187" ...
	I0906 15:57:13.526339   38061 cli_runner.go:164] Run: docker start newest-cni-20220906155618-22187
	I0906 15:57:13.861499   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:13.927605   38061 kic.go:415] container "newest-cni-20220906155618-22187" state is running.
	I0906 15:57:13.928189   38061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220906155618-22187
	I0906 15:57:13.996522   38061 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/config.json ...
	I0906 15:57:13.996921   38061 machine.go:88] provisioning docker machine ...
	I0906 15:57:13.996962   38061 ubuntu.go:169] provisioning hostname "newest-cni-20220906155618-22187"
	I0906 15:57:13.997026   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:14.063082   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:14.063292   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:14.063306   38061 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220906155618-22187 && echo "newest-cni-20220906155618-22187" | sudo tee /etc/hostname
	I0906 15:57:14.194713   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220906155618-22187
	
	I0906 15:57:14.194799   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:14.259875   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:14.260050   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:14.260074   38061 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220906155618-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220906155618-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220906155618-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:57:14.371719   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:57:14.371739   38061 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:57:14.371758   38061 ubuntu.go:177] setting up certificates
	I0906 15:57:14.371769   38061 provision.go:83] configureAuth start
	I0906 15:57:14.371834   38061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220906155618-22187
	I0906 15:57:14.437178   38061 provision.go:138] copyHostCerts
	I0906 15:57:14.437283   38061 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:57:14.437293   38061 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:57:14.437378   38061 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:57:14.437595   38061 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:57:14.437609   38061 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:57:14.437680   38061 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:57:14.437826   38061 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:57:14.437832   38061 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:57:14.437887   38061 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:57:14.438004   38061 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220906155618-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220906155618-22187]
	I0906 15:57:14.614910   38061 provision.go:172] copyRemoteCerts
	I0906 15:57:14.614995   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:57:14.615046   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:14.680969   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:14.761991   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:57:14.780722   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0906 15:57:14.798789   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:57:14.815599   38061 provision.go:86] duration metric: configureAuth took 443.812834ms
	I0906 15:57:14.815614   38061 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:57:14.815776   38061 config.go:180] Loaded profile config "newest-cni-20220906155618-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:57:14.815832   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:14.879897   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:14.880051   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:14.880063   38061 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:57:14.990002   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:57:14.990015   38061 ubuntu.go:71] root file system type: overlay
	I0906 15:57:14.990221   38061 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:57:14.990313   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.053981   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:15.054131   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:15.054188   38061 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:57:15.173937   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:57:15.174034   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.239651   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:15.239858   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:15.239871   38061 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:57:15.354422   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:57:15.354438   38061 machine.go:91] provisioned docker machine in 1.357503312s
	I0906 15:57:15.354448   38061 start.go:300] post-start starting for "newest-cni-20220906155618-22187" (driver="docker")
	I0906 15:57:15.354453   38061 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:57:15.354523   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:57:15.354571   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.418343   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:15.508238   38061 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:57:15.511748   38061 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:57:15.511764   38061 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:57:15.511777   38061 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:57:15.511782   38061 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:57:15.511790   38061 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:57:15.511892   38061 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:57:15.512036   38061 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:57:15.512186   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:57:15.519457   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:57:15.536583   38061 start.go:303] post-start completed in 182.104615ms
	I0906 15:57:15.536646   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:57:15.536698   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.599998   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:15.683479   38061 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:57:15.688515   38061 fix.go:57] fixHost completed within 2.290072228s
	I0906 15:57:15.688531   38061 start.go:83] releasing machines lock for "newest-cni-20220906155618-22187", held for 2.290109657s
	I0906 15:57:15.688600   38061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220906155618-22187
	I0906 15:57:15.751646   38061 ssh_runner.go:195] Run: systemctl --version
	I0906 15:57:15.751674   38061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:57:15.751728   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.751731   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.820773   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:15.820930   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:15.949103   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:57:15.956064   38061 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0906 15:57:15.968929   38061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:57:16.034020   38061 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 15:57:16.113607   38061 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:57:16.125973   38061 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:57:16.126029   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:57:16.135332   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:57:16.148936   38061 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:57:16.212981   38061 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:57:16.288178   38061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:57:16.354271   38061 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:57:16.583063   38061 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:57:16.654708   38061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:57:16.712294   38061 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:57:16.721743   38061 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:57:16.721814   38061 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:57:16.725541   38061 start.go:471] Will wait 60s for crictl version
	I0906 15:57:16.725592   38061 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:57:16.756190   38061 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:57:16.756258   38061 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:57:16.791822   38061 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:57:16.873552   38061 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:57:16.873767   38061 cli_runner.go:164] Run: docker exec -t newest-cni-20220906155618-22187 dig +short host.docker.internal
	I0906 15:57:16.987929   38061 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:57:16.988024   38061 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:57:16.992393   38061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:57:17.002087   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:17.088175   38061 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0906 15:57:17.110772   38061 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:57:17.110912   38061 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:57:17.142970   38061 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:57:17.142988   38061 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:57:17.143063   38061 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:57:17.173228   38061 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:57:17.173248   38061 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:57:17.173317   38061 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:57:17.245598   38061 cni.go:95] Creating CNI manager for ""
	I0906 15:57:17.245613   38061 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:57:17.245628   38061 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0906 15:57:17.245646   38061 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220906155618-22187 NodeName:newest-cni-20220906155618-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:57:17.245779   38061 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220906155618-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 15:57:17.245858   38061 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220906155618-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:newest-cni-20220906155618-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:57:17.245919   38061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:57:17.253199   38061 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:57:17.253256   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:57:17.260080   38061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0906 15:57:17.272307   38061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:57:17.284394   38061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0906 15:57:17.296588   38061 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:57:17.300356   38061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:57:17.309351   38061 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187 for IP: 192.168.76.2
	I0906 15:57:17.309460   38061 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:57:17.309510   38061 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:57:17.309588   38061 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/client.key
	I0906 15:57:17.309657   38061 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/apiserver.key.31bdca25
	I0906 15:57:17.309707   38061 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/proxy-client.key
	I0906 15:57:17.309917   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:57:17.309954   38061 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:57:17.309964   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:57:17.310003   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:57:17.310037   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:57:17.310067   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:57:17.310133   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:57:17.310709   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:57:17.327210   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:57:17.343660   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:57:17.360107   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:57:17.377076   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:57:17.393612   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:57:17.410150   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:57:17.427412   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:57:17.444174   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:57:17.461431   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:57:17.478714   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:57:17.496736   38061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:57:17.509660   38061 ssh_runner.go:195] Run: openssl version
	I0906 15:57:17.515007   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:57:17.522748   38061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:57:17.526709   38061 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:57:17.526758   38061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:57:17.532718   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:57:17.541924   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:57:17.549880   38061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:57:17.553907   38061 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:57:17.553943   38061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:57:17.559231   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:57:17.566305   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:57:17.573930   38061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:57:17.577911   38061 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:57:17.577954   38061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:57:17.583299   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 15:57:17.590617   38061 kubeadm.go:396] StartCluster: {Name:newest-cni-20220906155618-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:newest-cni-20220906155618-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:57:17.590723   38061 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:57:17.619280   38061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:57:17.626780   38061 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:57:17.626797   38061 kubeadm.go:627] restartCluster start
	I0906 15:57:17.626850   38061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:57:17.633489   38061 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:17.633545   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:17.698328   38061 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220906155618-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:57:17.698493   38061 kubeconfig.go:127] "newest-cni-20220906155618-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:57:17.698808   38061 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:57:17.699963   38061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:57:17.707550   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:17.707605   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:17.715574   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:17.917703   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:17.917865   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:17.927987   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.116188   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.116288   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.126409   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.315825   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.315906   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.324874   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.517798   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.517894   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.528617   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.717727   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.717892   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.728294   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.916619   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.916745   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.926897   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.116912   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.117070   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.127484   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.317704   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.317846   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.327947   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.516965   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.517098   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.527389   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.716964   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.717020   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.726042   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.915724   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.915814   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.924966   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.117734   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.117855   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.128595   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.317016   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.317146   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.326565   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.516591   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.516690   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.525551   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.717701   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.717839   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.728487   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.728497   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.728540   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.736269   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.736290   38061 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:57:20.736297   38061 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:57:20.736350   38061 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:57:20.766936   38061 docker.go:443] Stopping containers: [7e7ed90462fd b1ab90fe8437 02d6a781ff5f 4fcd0402fca9 645033b4c788 c8d7c56c2733 242860a8fe6d e009409cdecf 8e29e63a55f0 69bf13ac53df ca51b97bb84e 301a957fc9dc 4332abe62e59 15335225539e 4a180dfbd719 4d6c24833701 d8ce255eb4c2]
	I0906 15:57:20.767007   38061 ssh_runner.go:195] Run: docker stop 7e7ed90462fd b1ab90fe8437 02d6a781ff5f 4fcd0402fca9 645033b4c788 c8d7c56c2733 242860a8fe6d e009409cdecf 8e29e63a55f0 69bf13ac53df ca51b97bb84e 301a957fc9dc 4332abe62e59 15335225539e 4a180dfbd719 4d6c24833701 d8ce255eb4c2
	I0906 15:57:20.796405   38061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:57:20.806165   38061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:57:20.813733   38061 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep  6 22:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep  6 22:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:56 /etc/kubernetes/scheduler.conf
	
	I0906 15:57:20.813787   38061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:57:20.820907   38061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:57:20.827951   38061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:57:20.834797   38061 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.834845   38061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:57:20.841940   38061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:57:20.849076   38061 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.849121   38061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 15:57:20.855796   38061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:57:20.862900   38061 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:57:20.862913   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:20.907459   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:21.828843   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:21.955863   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:22.001994   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:22.063473   38061 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:57:22.063538   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:57:22.612353   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:57:23.111514   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:57:23.125136   38061 api_server.go:71] duration metric: took 1.061671301s to wait for apiserver process to appear ...
	I0906 15:57:23.125153   38061 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:57:23.125169   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:23.126463   38061 api_server.go:256] stopped: https://127.0.0.1:59968/healthz: Get "https://127.0.0.1:59968/healthz": EOF
	I0906 15:57:23.627256   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:26.490286   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:57:26.490307   38061 api_server.go:102] status: https://127.0.0.1:59968/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:57:26.627254   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:26.633634   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:57:26.633655   38061 api_server.go:102] status: https://127.0.0.1:59968/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:57:27.127381   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:27.133087   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:57:27.133100   38061 api_server.go:102] status: https://127.0.0.1:59968/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:57:27.626768   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:27.633445   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 200:
	ok
	I0906 15:57:27.640047   38061 api_server.go:140] control plane version: v1.25.0
	I0906 15:57:27.661922   38061 api_server.go:130] duration metric: took 4.536744631s to wait for apiserver health ...
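	
	The wait loop above (api_server.go:240/266) simply re-requests /healthz until the 403s for "system:anonymous" and the 500s from the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks give way to a 200. A minimal Go sketch of the same probe (not minikube's actual implementation), assuming the host-mapped apiserver port 59968 from this run and skipping verification of the cluster's self-signed certificate:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://127.0.0.1:59968/healthz")
			if err != nil {
				time.Sleep(500 * time.Millisecond) // apiserver not accepting connections yet
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // "ok": the control plane is healthy
			}
			time.Sleep(500 * time.Millisecond) // 403/500 while bootstrap post-start hooks finish
		}
	}
	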
	I0906 15:57:27.661945   38061 cni.go:95] Creating CNI manager for ""
	I0906 15:57:27.661953   38061 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:57:27.661968   38061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:57:27.669902   38061 system_pods.go:59] 9 kube-system pods found
	I0906 15:57:27.669922   38061 system_pods.go:61] "coredns-565d847f94-v2v64" [0dcab01c-2e4d-41bd-97e4-83387719a08a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 15:57:27.669928   38061 system_pods.go:61] "coredns-565d847f94-x2zfb" [c5c20b4d-204f-40b3-bb69-76373b532e0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 15:57:27.669933   38061 system_pods.go:61] "etcd-newest-cni-20220906155618-22187" [3daa576c-6b52-466a-a8d2-932e43340be3] Running
	I0906 15:57:27.669937   38061 system_pods.go:61] "kube-apiserver-newest-cni-20220906155618-22187" [7628f52b-b2df-4108-8a5b-a7be87bfcda6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:57:27.669941   38061 system_pods.go:61] "kube-controller-manager-newest-cni-20220906155618-22187" [a99f3285-f6c3-45ea-b605-169d6d139284] Running
	I0906 15:57:27.669945   38061 system_pods.go:61] "kube-proxy-c95tp" [58270e0b-3dbc-41bd-9301-6b57a78cb575] Running
	I0906 15:57:27.669951   38061 system_pods.go:61] "kube-scheduler-newest-cni-20220906155618-22187" [01916a12-c775-42b2-9eed-a4f2154502ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:57:27.669958   38061 system_pods.go:61] "metrics-server-5c8fd5cf8-sc5m5" [4f43d946-257b-4406-bad0-1a500e75d1fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:57:27.669964   38061 system_pods.go:61] "storage-provisioner" [4d676f03-34fa-46c9-8d96-bff836c74d3d] Running
	I0906 15:57:27.669973   38061 system_pods.go:74] duration metric: took 7.995857ms to wait for pod list to return data ...
	I0906 15:57:27.669982   38061 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:57:27.673950   38061 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:57:27.673965   38061 node_conditions.go:123] node cpu capacity is 6
	I0906 15:57:27.673976   38061 node_conditions.go:105] duration metric: took 3.983529ms to run NodePressure ...
	I0906 15:57:27.673990   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:27.906964   38061 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:57:27.916175   38061 ops.go:34] apiserver oom_adj: -16
	I0906 15:57:27.916187   38061 kubeadm.go:631] restartCluster took 10.2893565s
	I0906 15:57:27.916195   38061 kubeadm.go:398] StartCluster complete in 10.32555533s
	I0906 15:57:27.916215   38061 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:57:27.916305   38061 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:57:27.918038   38061 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:57:27.921275   38061 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220906155618-22187" rescaled to 1
	I0906 15:57:27.921320   38061 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:57:27.921352   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:57:27.960679   38061 out.go:177] * Verifying Kubernetes components...
	I0906 15:57:27.921363   38061 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 15:57:27.921528   38061 config.go:180] Loaded profile config "newest-cni-20220906155618-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:57:28.018705   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:57:28.018715   38061 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220906155618-22187"
	I0906 15:57:28.018719   38061 addons.go:65] Setting dashboard=true in profile "newest-cni-20220906155618-22187"
	I0906 15:57:28.018744   38061 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220906155618-22187"
	I0906 15:57:28.018758   38061 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220906155618-22187"
	I0906 15:57:28.018769   38061 addons.go:153] Setting addon dashboard=true in "newest-cni-20220906155618-22187"
	I0906 15:57:28.018769   38061 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220906155618-22187"
	I0906 15:57:28.018725   38061 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220906155618-22187"
	W0906 15:57:28.018775   38061 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:57:28.018797   38061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220906155618-22187"
	W0906 15:57:28.018777   38061 addons.go:162] addon dashboard should already be in state true
	I0906 15:57:28.018864   38061 host.go:66] Checking if "newest-cni-20220906155618-22187" exists ...
	I0906 15:57:28.018895   38061 host.go:66] Checking if "newest-cni-20220906155618-22187" exists ...
	W0906 15:57:28.018782   38061 addons.go:162] addon metrics-server should already be in state true
	I0906 15:57:28.018985   38061 host.go:66] Checking if "newest-cni-20220906155618-22187" exists ...
	I0906 15:57:28.019213   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.019277   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.019716   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.020396   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.125365   38061 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220906155618-22187"
	I0906 15:57:28.184608   38061 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	W0906 15:57:28.184636   38061 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:57:28.130104   38061 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 15:57:28.130153   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:28.143458   38061 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 15:57:28.163258   38061 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:57:28.184686   38061 host.go:66] Checking if "newest-cni-20220906155618-22187" exists ...
	I0906 15:57:28.208057   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.285485   38061 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 15:57:28.227773   38061 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 15:57:28.264708   38061 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:57:28.306434   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:57:28.306465   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 15:57:28.306524   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 15:57:28.306539   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 15:57:28.306567   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:28.306572   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:28.306615   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
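	
	The --format argument in the cli_runner lines above is a Go text/template that docker inspect evaluates against the container's JSON; indexing .NetworkSettings.Ports recovers the host port mapped to 22/tcp. A standalone sketch of the same template evaluated against a hand-written stand-in structure (the values are illustrative, not taken from this container):
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	func main() {
		// Stand-in for the .NetworkSettings.Ports section of `docker inspect` output.
		container := map[string]any{
			"NetworkSettings": map[string]any{
				"Ports": map[string][]map[string]string{
					"22/tcp": {{"HostIP": "0.0.0.0", "HostPort": "59964"}},
				},
			},
		}
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		if err := tmpl.Execute(os.Stdout, container); err != nil { // prints 59964
			panic(err)
		}
	}
	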
	I0906 15:57:28.318662   38061 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:57:28.318783   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:57:28.338487   38061 api_server.go:71] duration metric: took 417.140277ms to wait for apiserver process to appear ...
	I0906 15:57:28.338507   38061 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:57:28.338551   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:28.349284   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 200:
	ok
	I0906 15:57:28.352422   38061 api_server.go:140] control plane version: v1.25.0
	I0906 15:57:28.352441   38061 api_server.go:130] duration metric: took 13.927721ms to wait for apiserver health ...
	I0906 15:57:28.352449   38061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:57:28.362871   38061 system_pods.go:59] 9 kube-system pods found
	I0906 15:57:28.362892   38061 system_pods.go:61] "coredns-565d847f94-v2v64" [0dcab01c-2e4d-41bd-97e4-83387719a08a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 15:57:28.362900   38061 system_pods.go:61] "coredns-565d847f94-x2zfb" [c5c20b4d-204f-40b3-bb69-76373b532e0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 15:57:28.362909   38061 system_pods.go:61] "etcd-newest-cni-20220906155618-22187" [3daa576c-6b52-466a-a8d2-932e43340be3] Running
	I0906 15:57:28.362922   38061 system_pods.go:61] "kube-apiserver-newest-cni-20220906155618-22187" [7628f52b-b2df-4108-8a5b-a7be87bfcda6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:57:28.362930   38061 system_pods.go:61] "kube-controller-manager-newest-cni-20220906155618-22187" [a99f3285-f6c3-45ea-b605-169d6d139284] Running
	I0906 15:57:28.362938   38061 system_pods.go:61] "kube-proxy-c95tp" [58270e0b-3dbc-41bd-9301-6b57a78cb575] Running
	I0906 15:57:28.362947   38061 system_pods.go:61] "kube-scheduler-newest-cni-20220906155618-22187" [01916a12-c775-42b2-9eed-a4f2154502ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:57:28.362957   38061 system_pods.go:61] "metrics-server-5c8fd5cf8-sc5m5" [4f43d946-257b-4406-bad0-1a500e75d1fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:57:28.362969   38061 system_pods.go:61] "storage-provisioner" [4d676f03-34fa-46c9-8d96-bff836c74d3d] Running
	I0906 15:57:28.362974   38061 system_pods.go:74] duration metric: took 10.521305ms to wait for pod list to return data ...
	I0906 15:57:28.362981   38061 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:57:28.376559   38061 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:57:28.376571   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:57:28.376623   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:28.400382   38061 default_sa.go:45] found service account: "default"
	I0906 15:57:28.400411   38061 default_sa.go:55] duration metric: took 37.424247ms for default service account to be created ...
	I0906 15:57:28.400424   38061 kubeadm.go:573] duration metric: took 479.078431ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0906 15:57:28.400443   38061 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:57:28.403555   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:28.405108   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:28.405296   38061 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:57:28.405313   38061 node_conditions.go:123] node cpu capacity is 6
	I0906 15:57:28.405333   38061 node_conditions.go:105] duration metric: took 4.881031ms to run NodePressure ...
	I0906 15:57:28.405345   38061 start.go:216] waiting for startup goroutines ...
	I0906 15:57:28.407397   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:28.457981   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
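	
	The sshutil lines above open SSH sessions to the node through the host-mapped port using the machine's generated key. A minimal sketch (not minikube's sshutil code) using golang.org/x/crypto/ssh with the same parameters from the log, running the apiserver pgrep shown earlier; the host-key check is skipped only because this is a disposable test node:
	
	package main
	
	import (
		"log"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:59964", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, _ := sess.CombinedOutput("sudo pgrep -xnf kube-apiserver.*minikube.*")
		os.Stdout.Write(out)
	}
	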
	I0906 15:57:28.554518   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 15:57:28.554533   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 15:57:28.611392   38061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:57:28.616594   38061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:57:28.619084   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 15:57:28.619108   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 15:57:28.622435   38061 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 15:57:28.622448   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 15:57:28.697389   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 15:57:28.697406   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 15:57:28.714525   38061 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 15:57:28.714539   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 15:57:28.721801   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 15:57:28.721813   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 15:57:28.734650   38061 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:57:28.734662   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 15:57:28.800327   38061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:57:28.802754   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 15:57:28.802765   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 15:57:28.904031   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 15:57:28.904044   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 15:57:28.919693   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 15:57:28.919709   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 15:57:28.937321   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 15:57:28.937335   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 15:57:28.952399   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:57:28.952412   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 15:57:29.009410   38061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:57:29.520550   38061 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220906155618-22187"
	I0906 15:57:29.612280   38061 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0906 15:57:29.686985   38061 addons.go:414] enableAddons completed in 1.765634263s
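	
	Each addon above is staged by scp'ing its manifest into /etc/kubernetes/addons on the node and then applied in one batch with the cluster's own kubectl. A minimal sketch of that final step, assuming only a kubectl on PATH whose kubeconfig points at this cluster; the manifest paths are the ones staged in the log, abbreviated to three of the ten dashboard files:
	
	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		// Mirrors the dashboard apply above; extend the -f list with the
		// remaining dashboard-*.yaml files exactly as in the logged command.
		cmd := exec.Command("kubectl", "apply",
			"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
			"-f", "/etc/kubernetes/addons/dashboard-dp.yaml",
			"-f", "/etc/kubernetes/addons/dashboard-svc.yaml",
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubectl apply failed: %v", err)
		}
	}
	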
	I0906 15:57:29.721418   38061 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:57:29.742936   38061 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220906155618-22187" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:57:14 UTC, end at Tue 2022-09-06 22:58:06 UTC. --
	Sep 06 22:57:16 newest-cni-20220906155618-22187 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.425682756Z" level=info msg="Starting up"
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.427330483Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.427364168Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.427379857Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.427388757Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.428393010Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.428425562Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.428448018Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.428455427Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.432225433Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.437374188Z" level=info msg="Loading containers: start."
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.529165893Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.560894323Z" level=info msg="Loading containers: done."
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.570486576Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.570551060Z" level=info msg="Daemon has completed initialization"
	Sep 06 22:57:16 newest-cni-20220906155618-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.594534107Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.601386079Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 22:57:28 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:28.438391740Z" level=info msg="ignoring event" container=a98e94475dc5fd2cbb13d1f45486c45139a22af69f7bf8b129948a825c479087 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:28 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:28.616203273Z" level=info msg="ignoring event" container=67916b6f42b8223be3461e04b4ca4e3b88976267aa4374cf2208942834092647 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:30 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:30.185900211Z" level=info msg="ignoring event" container=f1489789da494fd02a1a903ada0bce076f9fdc4d973d575c758dbad3154452a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:30 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:30.192573493Z" level=info msg="ignoring event" container=cca4a7193d04f316e168f959f6f1bcf62053b82ae344765470f8ff4c84675834 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:31 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:31.132669196Z" level=info msg="ignoring event" container=bc0c58d4e8074f6af48f4178bfa0a7cdfad56484347a0891190360d6d8a3f9f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:31 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:31.163294826Z" level=info msg="ignoring event" container=ab189abb8cb941f06e281704c883889b451667428121c4df0e19b2a1607f53c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	7905db8500b06       6e38f40d628db       38 seconds ago       Running             storage-provisioner       1                   f2a36428b0ad7
	5145ef6106ee4       58a9a0c6d96f2       39 seconds ago       Running             kube-proxy                1                   74836d866bfe2
	dd0bd3030ab1a       a8a176a5d5d69       44 seconds ago       Running             etcd                      1                   83e31d1500b8a
	8a9751ebcecee       1a54c86c03a67       44 seconds ago       Running             kube-controller-manager   1                   4ac8fcb993a9f
	5ebff905e818d       bef2cf3115095       44 seconds ago       Running             kube-scheduler            1                   61380dd926407
	c00df6994e9d8       4d2edfd10d3e3       44 seconds ago       Running             kube-apiserver            1                   3ef124b7f4bfc
	02d6a781ff5fb       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   4fcd0402fca9a
	c8d7c56c27339       58a9a0c6d96f2       About a minute ago   Exited              kube-proxy                0                   e009409cdecf0
	69bf13ac53df3       a8a176a5d5d69       About a minute ago   Exited              etcd                      0                   15335225539e2
	ca51b97bb84ea       bef2cf3115095       About a minute ago   Exited              kube-scheduler            0                   4d6c248337013
	301a957fc9dc9       4d2edfd10d3e3       About a minute ago   Exited              kube-apiserver            0                   d8ce255eb4c23
	4332abe62e595       1a54c86c03a67       About a minute ago   Exited              kube-controller-manager   0                   4a180dfbd7191
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220906155618-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220906155618-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=newest-cni-20220906155618-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_56_44_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:56:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220906155618-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:58:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:58:04 +0000   Tue, 06 Sep 2022 22:56:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:58:04 +0000   Tue, 06 Sep 2022 22:56:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:58:04 +0000   Tue, 06 Sep 2022 22:56:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 06 Sep 2022 22:58:04 +0000   Tue, 06 Sep 2022 22:58:04 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20220906155618-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                bf14c125-9485-443b-a7ea-21aee3a246d8
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-v2v64                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     70s
	  kube-system                 etcd-newest-cni-20220906155618-22187                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kube-apiserver-newest-cni-20220906155618-22187              250m (4%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-newest-cni-20220906155618-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-c95tp                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-scheduler-newest-cni-20220906155618-22187              100m (1%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 metrics-server-5c8fd5cf8-sc5m5                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         68s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kubernetes-dashboard        dashboard-metrics-scraper-7b94984548-l4lmd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kubernetes-dashboard        kubernetes-dashboard-54596f475f-cfznr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  Starting                 68s                kube-proxy       
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  83s                kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s                kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s                kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientPID
	  Normal  NodeReady                83s                kubelet          Node newest-cni-20220906155618-22187 status is now: NodeReady
	  Normal  RegisteredNode           71s                node-controller  Node newest-cni-20220906155618-22187 event: Registered Node newest-cni-20220906155618-22187 in Controller
	  Normal  NodeHasSufficientMemory  45s (x5 over 45s)  kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientMemory
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    45s (x5 over 45s)  kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x4 over 45s)  kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-20220906155618-22187 event: Registered Node newest-cni-20220906155618-22187 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             3s                 kubelet          Node newest-cni-20220906155618-22187 status is now: NodeNotReady
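	
	The final events show the node flapping to NodeNotReady ("PLEG is not healthy: pleg has yet to be successful") immediately after the kubelet restart; the condition clears once the kubelet completes a successful pod-lifecycle relist. A hedged sketch, assuming a kubectl on PATH, of polling the node's Ready condition until it reports True:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	func main() {
		jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
		for {
			out, err := exec.Command("kubectl", "get", "node",
				"newest-cni-20220906155618-22187", "-o", jsonpath).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second) // NotReady while the PLEG has yet to relist successfully
		}
	}
	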
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [69bf13ac53df] <==
	* {"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220906155618-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:56:39.963Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:56:39.963Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-09-06T22:56:39.965Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:56:39.965Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:56:39.965Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:57:00.273Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-09-06T22:57:00.273Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220906155618-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/09/06 22:57:00 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/09/06 22:57:00 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-09-06T22:57:00.339Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-09-06T22:57:00.341Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:57:00.342Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:57:00.342Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220906155618-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [dd0bd3030ab1] <==
	* {"level":"info","ts":"2022-09-06T22:57:23.311Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-09-06T22:57:23.312Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-09-06T22:57:23.312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-09-06T22:57:23.312Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-09-06T22:57:23.313Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:57:23.313Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:57:23.315Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-06T22:57:23.315Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:57:23.316Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:57:23.316Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:57:23.316Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220906155618-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:57:24.958Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:57:24.958Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:58:07 up  1:14,  0 users,  load average: 1.04, 1.00, 1.01
	Linux newest-cni-20220906155618-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [301a957fc9dc] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:57:10.241969       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:57:10.252993       1 logging.go:59] [core] [Channel #13 SubChannel #14] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:57:10.310962       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [c00df6994e9d] <==
	* I0906 22:57:26.604683       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 22:57:26.604750       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0906 22:57:26.609464       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0906 22:57:26.614384       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0906 22:57:26.624224       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:57:27.311452       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 22:57:27.495688       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 22:57:27.621570       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:57:27.621605       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0906 22:57:27.621611       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 22:57:27.621620       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:57:27.621652       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 22:57:27.622626       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 22:57:27.817311       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:57:27.826493       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:57:27.850939       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:57:27.865639       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:57:27.870454       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:57:29.520120       1 controller.go:616] quota admission added evaluator for: namespaces
	I0906 22:57:29.588937       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.73.55]
	I0906 22:57:29.597123       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.107.63.137]
	I0906 22:58:04.196875       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 22:58:04.396812       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0906 22:58:04.511386       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [4332abe62e59] <==
	* I0906 22:56:56.393549       1 shared_informer.go:262] Caches are synced for daemon sets
	I0906 22:56:56.393592       1 shared_informer.go:262] Caches are synced for node
	I0906 22:56:56.393607       1 range_allocator.go:166] Starting range CIDR allocator
	I0906 22:56:56.393610       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0906 22:56:56.393614       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0906 22:56:56.397592       1 range_allocator.go:367] Set node newest-cni-20220906155618-22187 PodCIDR to [192.168.0.0/24]
	I0906 22:56:56.438014       1 shared_informer.go:262] Caches are synced for endpoint
	I0906 22:56:56.439732       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0906 22:56:56.452503       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:56:56.478482       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0906 22:56:56.491419       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:56:56.522178       1 shared_informer.go:262] Caches are synced for job
	I0906 22:56:56.538440       1 shared_informer.go:262] Caches are synced for cronjob
	I0906 22:56:56.542805       1 shared_informer.go:262] Caches are synced for persistent volume
	I0906 22:56:56.906361       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:56:56.987304       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:56:56.987322       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 22:56:57.193835       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0906 22:56:57.205687       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0906 22:56:57.245460       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c95tp"
	I0906 22:56:57.391954       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-x2zfb"
	I0906 22:56:57.404827       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-v2v64"
	I0906 22:56:57.421129       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-x2zfb"
	I0906 22:56:59.646459       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I0906 22:56:59.651715       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-sc5m5"
	
	* 
	* ==> kube-controller-manager [8a9751ebcece] <==
	* I0906 22:58:04.416747       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-l4lmd"
	W0906 22:58:04.430048       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="newest-cni-20220906155618-22187" does not exist
	I0906 22:58:04.430699       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:58:04.430766       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:58:04.438062       1 shared_informer.go:262] Caches are synced for GC
	I0906 22:58:04.439414       1 shared_informer.go:262] Caches are synced for daemon sets
	I0906 22:58:04.446850       1 shared_informer.go:262] Caches are synced for persistent volume
	I0906 22:58:04.461330       1 shared_informer.go:262] Caches are synced for disruption
	I0906 22:58:04.461427       1 shared_informer.go:262] Caches are synced for TTL
	I0906 22:58:04.490235       1 shared_informer.go:262] Caches are synced for taint
	I0906 22:58:04.490582       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0906 22:58:04.490599       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0906 22:58:04.490664       1 taint_manager.go:209] "Sending events to api server"
	I0906 22:58:04.490767       1 event.go:294] "Event occurred" object="newest-cni-20220906155618-22187" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220906155618-22187 event: Registered Node newest-cni-20220906155618-22187 in Controller"
	W0906 22:58:04.490689       1 node_lifecycle_controller.go:1058] Missing timestamp for Node newest-cni-20220906155618-22187. Assuming now as a timestamp.
	I0906 22:58:04.490827       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0906 22:58:04.495307       1 shared_informer.go:262] Caches are synced for node
	I0906 22:58:04.495343       1 range_allocator.go:166] Starting range CIDR allocator
	I0906 22:58:04.495347       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0906 22:58:04.495353       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0906 22:58:04.499049       1 shared_informer.go:262] Caches are synced for attach detach
	I0906 22:58:04.504815       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0906 22:58:04.918990       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:58:04.919006       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 22:58:04.922268       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [5145ef6106ee] <==
	* I0906 22:57:27.997479       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:57:27.997531       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:57:27.997548       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:57:28.062388       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:57:28.062497       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:57:28.062514       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:57:28.062532       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:57:28.062578       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:57:28.062762       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:57:28.063088       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:57:28.063142       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:57:28.065527       1 config.go:317] "Starting service config controller"
	I0906 22:57:28.065549       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:57:28.065569       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:57:28.065572       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:57:28.066349       1 config.go:444] "Starting node config controller"
	I0906 22:57:28.066355       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:57:28.166355       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:57:28.166382       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:57:28.166452       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [c8d7c56c2733] <==
	* I0906 22:56:58.247572       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:56:58.247651       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:56:58.247687       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:56:58.271190       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:56:58.271232       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:56:58.271241       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:56:58.271252       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:56:58.271266       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:56:58.271492       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:56:58.271825       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:56:58.271852       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:56:58.272314       1 config.go:317] "Starting service config controller"
	I0906 22:56:58.272348       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:56:58.272361       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:56:58.272364       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:56:58.274663       1 config.go:444] "Starting node config controller"
	I0906 22:56:58.274692       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:56:58.372676       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:56:58.372804       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:56:58.374778       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [5ebff905e818] <==
	* W0906 22:57:23.238073       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0906 22:57:23.918702       1 serving.go:348] Generated self-signed cert in-memory
	W0906 22:57:26.515558       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 22:57:26.515756       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:57:26.515851       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 22:57:26.515926       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 22:57:26.534185       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.0"
	I0906 22:57:26.534371       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:57:26.536123       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 22:57:26.536208       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 22:57:26.536661       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:57:26.536235       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:57:26.638031       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ca51b97bb84e] <==
	* W0906 22:56:41.555709       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 22:56:41.555743       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 22:56:41.555742       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 22:56:41.555813       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 22:56:41.555941       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 22:56:41.555970       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:56:42.431338       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 22:56:42.431418       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 22:56:42.437187       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 22:56:42.437241       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 22:56:42.472021       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 22:56:42.472074       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 22:56:42.492956       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 22:56:42.493011       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:56:42.591470       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 22:56:42.591515       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 22:56:42.652609       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 22:56:42.652656       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 22:56:42.666757       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 22:56:42.666796       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0906 22:56:45.638951       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:57:00.272767       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0906 22:57:00.272804       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0906 22:57:00.273000       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0906 22:57:00.273084       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:57:14 UTC, end at Tue 2022-09-06 22:58:09 UTC. --
	Sep 06 22:58:08 newest-cni-20220906155618-22187 kubelet[3624]:         Try `iptables -h' or 'iptables --help' for more information.
	Sep 06 22:58:08 newest-cni-20220906155618-22187 kubelet[3624]:         ]
	Sep 06 22:58:08 newest-cni-20220906155618-22187 kubelet[3624]:  > pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-cfznr"
	Sep 06 22:58:08 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:08.849903    3624 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-54596f475f-cfznr_kubernetes-dashboard(fb265a48-200e-44f1-a3ac-b58f85045487)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-54596f475f-cfznr_kubernetes-dashboard(fb265a48-200e-44f1-a3ac-b58f85045487)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"cffa86fd598579c65aeed5a8de25fc79cb666e6f4c9331bb1d7d495c9409fdc0\\\" network for pod \\\"kubernetes-dashboard-54596f475f-cfznr\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-54596f475f-cfznr_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"cffa86fd598579c65aeed5a8de25fc79cb666e6f4c9331bb1d7d495c9409fdc0\\\" network for pod \\\"kubernetes-dashboard-54596f475f-cfznr\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-54596f475f-cfznr_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-9076da4bf1a2d26de40e2e35 -m comment --comment name: \\\"crio\\\" id: \\\"cffa86fd598579c65aeed5a8de25fc79cb666e6f4c9331bb1d7d495c9409fdc0\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-9076da4bf1a2d26de40e2e35':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-cfznr" podUID=fb265a48-200e-44f1-a3ac-b58f85045487
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:09.298689    3624 remote_runtime.go:233] "RunPodSandbox from runtime service failed" err=<
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         rpc error: code = Unknown desc = [failed to set up sandbox container "1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1" network for pod "metrics-server-5c8fd5cf8-sc5m5": networkPlugin cni failed to set up pod "metrics-server-5c8fd5cf8-sc5m5_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1" network for pod "metrics-server-5c8fd5cf8-sc5m5": networkPlugin cni failed to teardown pod "metrics-server-5c8fd5cf8-sc5m5_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-eee37c10a4f5a1c112f83a4c -m comment --comment name: "crio" id: "1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-eee37c10a4f5a1c112f83a4c':No such file or directory
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         Try `iptables -h' or 'iptables --help' for more information.
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         ]
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:  >
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:09.298751    3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=<
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         rpc error: code = Unknown desc = [failed to set up sandbox container "1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1" network for pod "metrics-server-5c8fd5cf8-sc5m5": networkPlugin cni failed to set up pod "metrics-server-5c8fd5cf8-sc5m5_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1" network for pod "metrics-server-5c8fd5cf8-sc5m5": networkPlugin cni failed to teardown pod "metrics-server-5c8fd5cf8-sc5m5_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-eee37c10a4f5a1c112f83a4c -m comment --comment name: "crio" id: "1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-eee37c10a4f5a1c112f83a4c':No such file or directory
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         Try `iptables -h' or 'iptables --help' for more information.
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         ]
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:  > pod="kube-system/metrics-server-5c8fd5cf8-sc5m5"
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:09.298770    3624 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err=<
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         rpc error: code = Unknown desc = [failed to set up sandbox container "1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1" network for pod "metrics-server-5c8fd5cf8-sc5m5": networkPlugin cni failed to set up pod "metrics-server-5c8fd5cf8-sc5m5_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1" network for pod "metrics-server-5c8fd5cf8-sc5m5": networkPlugin cni failed to teardown pod "metrics-server-5c8fd5cf8-sc5m5_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-eee37c10a4f5a1c112f83a4c -m comment --comment name: "crio" id: "1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-eee37c10a4f5a1c112f83a4c':No such file or directory
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         Try `iptables -h' or 'iptables --help' for more information.
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:         ]
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]:  > pod="kube-system/metrics-server-5c8fd5cf8-sc5m5"
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:09.298853    3624 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-5c8fd5cf8-sc5m5_kube-system(4f43d946-257b-4406-bad0-1a500e75d1fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-5c8fd5cf8-sc5m5_kube-system(4f43d946-257b-4406-bad0-1a500e75d1fb)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1\\\" network for pod \\\"metrics-server-5c8fd5cf8-sc5m5\\\": networkPlugin cni failed to set up pod \\\"metrics-server-5c8fd5cf8-sc5m5_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1\\\" network for pod \\\"metrics-server-5c8fd5cf8-sc5m5\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-5c8fd5cf8-sc5m5_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-eee37c10a4f5a1c112f83a4c -m comment --comment name: \\\"crio\\\" id: \\\"1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-eee37c10a4f5a1c112f83a4c':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-5c8fd5cf8-sc5m5" podUID=4f43d946-257b-4406-bad0-1a500e75d1fb
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]: I0906 22:58:09.402576    3624 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1"
	Sep 06 22:58:09 newest-cni-20220906155618-22187 kubelet[3624]: I0906 22:58:09.406484    3624 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="cffa86fd598579c65aeed5a8de25fc79cb666e6f4c9331bb1d7d495c9409fdc0"
	
	* 
	* ==> storage-provisioner [02d6a781ff5f] <==
	* I0906 22:56:59.402011       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:56:59.409781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:56:59.409861       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:56:59.414294       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:56:59.414482       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220906155618-22187_189a4ab3-5ed6-4342-ad44-f821097217c0!
	I0906 22:56:59.414451       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25cd2a16-7724-4497-b738-8b32316e2630", APIVersion:"v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220906155618-22187_189a4ab3-5ed6-4342-ad44-f821097217c0 became leader
	I0906 22:56:59.514667       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220906155618-22187_189a4ab3-5ed6-4342-ad44-f821097217c0!
	
	* 
	* ==> storage-provisioner [7905db8500b0] <==
	* I0906 22:57:28.853423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:57:28.921020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:57:28.921314       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:58:04.199365       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:58:04.199549       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220906155618-22187_0b6ca968-72ec-4204-8c11-1f6905a023a5!
	I0906 22:58:04.202203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25cd2a16-7724-4497-b738-8b32316e2630", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220906155618-22187_0b6ca968-72ec-4204-8c11-1f6905a023a5 became leader
	I0906 22:58:04.302220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220906155618-22187_0b6ca968-72ec-4204-8c11-1f6905a023a5!
	

                                                
                                                
-- /stdout --
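
Every CreatePodSandbox failure in the kubelet log above follows the same two-step pattern: the bridge CNI plugin cannot assign an address to cni0 ("failed to set bridge addr: could not add IP address to \"cni0\": permission denied"), and the subsequent teardown then fails with iptables exit status 2 because the per-sandbox CNI-* NAT chain was never installed in the first place. A minimal diagnostic sketch in Go, the harness's own language (a hypothetical helper, not part of helpers_test.go; the profile name is taken from this run and both commands are plain `minikube ssh` invocations):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Hypothetical post-mortem helper: re-run the two checks implied by the
	// kubelet errors above, using ordinary minikube ssh like the harness does.
	func main() {
		profile := "newest-cni-20220906155618-22187"
		cmds := [][]string{
			// Does cni0 exist and carry an address? "permission denied" while
			// adding an IP suggests the bridge was never configured.
			{"out/minikube-darwin-amd64", "ssh", "-p", profile, "--", "sudo", "ip", "addr", "show", "cni0"},
			// List NAT POSTROUTING rules; the per-sandbox CNI-* chains the
			// teardown tried to delete would appear here had setup succeeded.
			{"out/minikube-darwin-amd64", "ssh", "-p", profile, "--", "sudo", "iptables", "-t", "nat", "-S", "POSTROUTING"},
		}
		for _, c := range cmds {
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			fmt.Printf("$ %v\nerr: %v\n%s\n", c, err, out)
		}
	}
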
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220906155618-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-v2v64 metrics-server-5c8fd5cf8-sc5m5 dashboard-metrics-scraper-7b94984548-l4lmd kubernetes-dashboard-54596f475f-cfznr
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220906155618-22187 describe pod coredns-565d847f94-v2v64 metrics-server-5c8fd5cf8-sc5m5 dashboard-metrics-scraper-7b94984548-l4lmd kubernetes-dashboard-54596f475f-cfznr
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220906155618-22187 describe pod coredns-565d847f94-v2v64 metrics-server-5c8fd5cf8-sc5m5 dashboard-metrics-scraper-7b94984548-l4lmd kubernetes-dashboard-54596f475f-cfznr: exit status 1 (55.51803ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-v2v64" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-sc5m5" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-7b94984548-l4lmd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-54596f475f-cfznr" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220906155618-22187 describe pod coredns-565d847f94-v2v64 metrics-server-5c8fd5cf8-sc5m5 dashboard-metrics-scraper-7b94984548-l4lmd kubernetes-dashboard-54596f475f-cfznr: exit status 1
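
The NotFound errors above are most plausibly a namespacing artifact rather than proof the pods vanished: the listing at helpers_test.go:261 used -A, but the describe at helpers_test.go:275 passes bare pod names with no -n, so kubectl resolves them against the default namespace while the pods live in kube-system and kubernetes-dashboard. A namespace-aware sketch of the same step (a hypothetical helper, not the harness's code; the namespaces are inferred from the pod names above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Hypothetical namespace-aware version of the describe step: each pod gets
	// an explicit -n so kubectl does not fall back to the default namespace.
	func main() {
		ctx := "newest-cni-20220906155618-22187"
		pods := map[string]string{
			"coredns-565d847f94-v2v64":                   "kube-system",
			"metrics-server-5c8fd5cf8-sc5m5":             "kube-system",
			"dashboard-metrics-scraper-7b94984548-l4lmd": "kubernetes-dashboard",
			"kubernetes-dashboard-54596f475f-cfznr":      "kubernetes-dashboard",
		}
		for pod, ns := range pods {
			out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
				"describe", "pod", pod).CombinedOutput()
			if err != nil {
				fmt.Printf("describe %s/%s failed: %v\n", ns, pod, err)
			}
			fmt.Printf("%s", out)
		}
	}
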
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220906155618-22187
helpers_test.go:235: (dbg) docker inspect newest-cni-20220906155618-22187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5",
	        "Created": "2022-09-06T22:56:24.751095531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296379,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:57:13.859707329Z",
	            "FinishedAt": "2022-09-06T22:57:11.950390798Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5/hostname",
	        "HostsPath": "/var/lib/docker/containers/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5/hosts",
	        "LogPath": "/var/lib/docker/containers/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5/13042027b96bc3dd21ea87955e2cd82ae891577b7231acba05fbd646b52acdc5-json.log",
	        "Name": "/newest-cni-20220906155618-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220906155618-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220906155618-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/02f6b321fc29f63d513d9f2c918841e5b758c9405eabe4647f5f5e017467f08a-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02f6b321fc29f63d513d9f2c918841e5b758c9405eabe4647f5f5e017467f08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02f6b321fc29f63d513d9f2c918841e5b758c9405eabe4647f5f5e017467f08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02f6b321fc29f63d513d9f2c918841e5b758c9405eabe4647f5f5e017467f08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220906155618-22187",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220906155618-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220906155618-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220906155618-22187",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220906155618-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c33081d9f576e40fd633fa401d30cbcdbbe1dab5fcb0fb5797bba48220266681",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59964"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59965"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59966"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59967"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59968"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c33081d9f576",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220906155618-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "13042027b96b",
	                        "newest-cni-20220906155618-22187"
	                    ],
	                    "NetworkID": "283a9c52fde1270ef4c155872d705fb55e9f549a9a923e6a0c14e83559ebb8e6",
	                    "EndpointID": "c3b4704b7fbbf593cd8bacf42b3310256d317dfdecea4cfa83411f1ba7ce0d5e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
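
Two details worth pulling out of this dump: State.FinishedAt (22:57:11) precedes State.StartedAt (22:57:13), confirming the kic container was stopped and started once, matching the stop/start pair in the Audit table below, and the host ports for 22/2376/5000/8443/32443 were freshly allocated (59964-59968) on that restart. A short sketch that extracts just those fields with a docker inspect format template instead of scanning the full JSON (standard docker CLI usage; the container name is from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Extract just the restart-relevant fields from docker inspect via a Go
	// template instead of reading the full JSON dump above.
	func main() {
		name := "newest-cni-20220906155618-22187"
		tmpl := "started={{.State.StartedAt}} finished={{.State.FinishedAt}} ports={{json .NetworkSettings.Ports}}"
		out, err := exec.Command("docker", "inspect", "-f", tmpl, name).CombinedOutput()
		if err != nil {
			fmt.Printf("docker inspect failed: %v\n%s", err, out)
			return
		}
		// FinishedAt earlier than StartedAt means a stop followed by a fresh
		// start, not a crash loop (RestartCount above is still 0).
		fmt.Printf("%s", out)
	}
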
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220906155618-22187 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220906155618-22187 logs -n 25: (5.085361492s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT | 06 Sep 22 15:47 PDT |
	|         | old-k8s-version-20220906154143-22187                       |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220906154143-22187            | jenkins | v1.26.1 | 06 Sep 22 15:47 PDT |                     |
	|         | old-k8s-version-20220906154143-22187                       |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:48 PDT | 06 Sep 22 15:48 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220906154156-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:49 PDT |
	|         | no-preload-20220906154156-22187                            |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:49 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:50 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
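
The final start entry in the audit table above corresponds to the "Last Start" log that follows. For local reproduction, the audited flags can be replayed verbatim against a throwaway profile; a minimal sketch, assuming the same locally built binary (the profile name newest-cni-repro is a placeholder):

    # Replay the audited start command against a scratch profile.
    # Assumes out/minikube-darwin-amd64 is built from the commit under test.
    out/minikube-darwin-amd64 start -p newest-cni-repro \
      --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --driver=docker --kubernetes-version=v1.25.0
    # Clean up the scratch profile afterwards:
    out/minikube-darwin-amd64 delete -p newest-cni-repro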
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:57:12
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:57:12.639145   38061 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:57:12.639316   38061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:57:12.639321   38061 out.go:309] Setting ErrFile to fd 2...
	I0906 15:57:12.639324   38061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:57:12.639414   38061 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:57:12.639877   38061 out.go:303] Setting JSON to false
	I0906 15:57:12.654688   38061 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10603,"bootTime":1662494429,"procs":334,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:57:12.654777   38061 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:57:12.676421   38061 out.go:177] * [newest-cni-20220906155618-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:57:12.698442   38061 notify.go:193] Checking for updates...
	I0906 15:57:12.720426   38061 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:57:12.742512   38061 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:57:12.764488   38061 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:57:12.788008   38061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:57:12.811102   38061 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:57:12.832908   38061 config.go:180] Loaded profile config "newest-cni-20220906155618-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:57:12.833426   38061 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:57:12.901134   38061 docker.go:137] docker version: linux-20.10.17
	I0906 15:57:12.901284   38061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:57:13.032454   38061 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:57:12.96388528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:57:13.075840   38061 out.go:177] * Using the docker driver based on existing profile
	I0906 15:57:13.097333   38061 start.go:284] selected driver: docker
	I0906 15:57:13.097357   38061 start.go:808] validating driver "docker" against &{Name:newest-cni-20220906155618-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:newest-cni-20220906155618-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:57:13.097525   38061 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:57:13.100279   38061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:57:13.229790   38061 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:57:13.162574986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:57:13.229955   38061 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 15:57:13.229971   38061 cni.go:95] Creating CNI manager for ""
	I0906 15:57:13.229982   38061 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:57:13.229994   38061 start_flags.go:310] config:
	{Name:newest-cni-20220906155618-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:newest-cni-20220906155618-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:57:13.251979   38061 out.go:177] * Starting control plane node newest-cni-20220906155618-22187 in cluster newest-cni-20220906155618-22187
	I0906 15:57:13.273520   38061 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:57:13.294552   38061 out.go:177] * Pulling base image ...
	I0906 15:57:13.336322   38061 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:57:13.336368   38061 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:57:13.336383   38061 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:57:13.336395   38061 cache.go:57] Caching tarball of preloaded images
	I0906 15:57:13.336500   38061 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:57:13.336510   38061 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
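
The preload check above (preload.go:148/174) reduces to a file-existence test on the cached tarball. The equivalent check by hand, a sketch assuming MINIKUBE_HOME is exported as shown in the environment dump earlier in this log:

    # Succeeds only if the v1.25.0 docker/overlay2 preload tarball is cached.
    stat "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4"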
	I0906 15:57:13.336988   38061 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/config.json ...
	I0906 15:57:13.398266   38061 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:57:13.398281   38061 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:57:13.398290   38061 cache.go:208] Successfully downloaded all kic artifacts
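
The "exists in daemon, skipping load" decision amounts to an image lookup against the local Docker daemon. The same lookup by hand, a sketch using the digest-pinned reference from this log:

    # Exits 0 and prints the image ID if the kic base image is already loaded.
    docker image inspect --format '{{.Id}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d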
	I0906 15:57:13.398331   38061 start.go:364] acquiring machines lock for newest-cni-20220906155618-22187: {Name:mk401549b6b19b3ef0eb6b86c2aa909990058f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:57:13.398408   38061 start.go:368] acquired machines lock for "newest-cni-20220906155618-22187" in 56.739µs
	I0906 15:57:13.398427   38061 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:57:13.398437   38061 fix.go:55] fixHost starting: 
	I0906 15:57:13.398729   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:13.461752   38061 fix.go:103] recreateIfNeeded on newest-cni-20220906155618-22187: state=Stopped err=<nil>
	W0906 15:57:13.461778   38061 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:57:13.505619   38061 out.go:177] * Restarting existing docker container for "newest-cni-20220906155618-22187" ...
	I0906 15:57:13.526339   38061 cli_runner.go:164] Run: docker start newest-cni-20220906155618-22187
	I0906 15:57:13.861499   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:13.927605   38061 kic.go:415] container "newest-cni-20220906155618-22187" state is running.
	I0906 15:57:13.928189   38061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220906155618-22187
	I0906 15:57:13.996522   38061 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/config.json ...
	I0906 15:57:13.996921   38061 machine.go:88] provisioning docker machine ...
	I0906 15:57:13.996962   38061 ubuntu.go:169] provisioning hostname "newest-cni-20220906155618-22187"
	I0906 15:57:13.997026   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:14.063082   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:14.063292   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:14.063306   38061 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220906155618-22187 && echo "newest-cni-20220906155618-22187" | sudo tee /etc/hostname
	I0906 15:57:14.194713   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220906155618-22187
	
	I0906 15:57:14.194799   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:14.259875   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:14.260050   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:14.260074   38061 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220906155618-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220906155618-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220906155618-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:57:14.371719   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: 
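
The hostname step above is deliberately idempotent: /etc/hosts is rewritten only when the name is absent, and an existing 127.0.1.1 entry is replaced rather than duplicated. The same pattern as a standalone sketch (NEW is a placeholder):

    # Idempotent /etc/hosts update, mirroring the SSH command above.
    NEW=newest-cni-20220906155618-22187
    if ! grep -xq ".*\s${NEW}" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NEW}/g" /etc/hosts
      else
        echo "127.0.1.1 ${NEW}" | sudo tee -a /etc/hosts
      fi
    fi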
	I0906 15:57:14.371739   38061 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:57:14.371758   38061 ubuntu.go:177] setting up certificates
	I0906 15:57:14.371769   38061 provision.go:83] configureAuth start
	I0906 15:57:14.371834   38061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220906155618-22187
	I0906 15:57:14.437178   38061 provision.go:138] copyHostCerts
	I0906 15:57:14.437283   38061 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:57:14.437293   38061 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:57:14.437378   38061 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:57:14.437595   38061 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:57:14.437609   38061 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:57:14.437680   38061 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:57:14.437826   38061 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:57:14.437832   38061 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:57:14.437887   38061 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:57:14.438004   38061 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220906155618-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220906155618-22187]
	I0906 15:57:14.614910   38061 provision.go:172] copyRemoteCerts
	I0906 15:57:14.614995   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:57:14.615046   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:14.680969   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:14.761991   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:57:14.780722   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0906 15:57:14.798789   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 15:57:14.815599   38061 provision.go:86] duration metric: configureAuth took 443.812834ms
	I0906 15:57:14.815614   38061 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:57:14.815776   38061 config.go:180] Loaded profile config "newest-cni-20220906155618-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:57:14.815832   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:14.879897   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:14.880051   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:14.880063   38061 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:57:14.990002   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:57:14.990015   38061 ubuntu.go:71] root file system type: overlay
	I0906 15:57:14.990221   38061 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:57:14.990313   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.053981   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:15.054131   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:15.054188   38061 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:57:15.173937   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:57:15.174034   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.239651   38061 main.go:134] libmachine: Using SSH client type: native
	I0906 15:57:15.239858   38061 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 59964 <nil> <nil>}
	I0906 15:57:15.239871   38061 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:57:15.354422   38061 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:57:15.354438   38061 machine.go:91] provisioned docker machine in 1.357503312s
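
The unit rewrite above is guarded: the new file replaces /lib/systemd/system/docker.service (followed by daemon-reload, enable, and restart) only when diff reports a difference. To confirm which ExecStart line won inside the node, systemd's own tooling can be used; a sketch against the profile from this run:

    # Show the effective docker.service, including the cleared-then-replaced ExecStart.
    out/minikube-darwin-amd64 ssh -p newest-cni-20220906155618-22187 \
      "sudo systemctl cat docker.service"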
	I0906 15:57:15.354448   38061 start.go:300] post-start starting for "newest-cni-20220906155618-22187" (driver="docker")
	I0906 15:57:15.354453   38061 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:57:15.354523   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:57:15.354571   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.418343   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:15.508238   38061 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:57:15.511748   38061 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:57:15.511764   38061 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:57:15.511777   38061 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:57:15.511782   38061 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:57:15.511790   38061 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:57:15.511892   38061 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:57:15.512036   38061 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:57:15.512186   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:57:15.519457   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:57:15.536583   38061 start.go:303] post-start completed in 182.104615ms
	I0906 15:57:15.536646   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:57:15.536698   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.599998   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:15.683479   38061 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:57:15.688515   38061 fix.go:57] fixHost completed within 2.290072228s
	I0906 15:57:15.688531   38061 start.go:83] releasing machines lock for "newest-cni-20220906155618-22187", held for 2.290109657s
	I0906 15:57:15.688600   38061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220906155618-22187
	I0906 15:57:15.751646   38061 ssh_runner.go:195] Run: systemctl --version
	I0906 15:57:15.751674   38061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:57:15.751728   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.751731   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:15.820773   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:15.820930   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:15.949103   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 15:57:15.956064   38061 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0906 15:57:15.968929   38061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:57:16.034020   38061 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 15:57:16.113607   38061 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:57:16.125973   38061 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:57:16.126029   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:57:16.135332   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:57:16.148936   38061 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:57:16.212981   38061 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:57:16.288178   38061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:57:16.354271   38061 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:57:16.583063   38061 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:57:16.654708   38061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:57:16.712294   38061 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:57:16.721743   38061 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:57:16.721814   38061 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:57:16.725541   38061 start.go:471] Will wait 60s for crictl version
	I0906 15:57:16.725592   38061 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:57:16.756190   38061 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:57:16.756258   38061 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:57:16.791822   38061 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:57:16.873552   38061 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:57:16.873767   38061 cli_runner.go:164] Run: docker exec -t newest-cni-20220906155618-22187 dig +short host.docker.internal
	I0906 15:57:16.987929   38061 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:57:16.988024   38061 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:57:16.992393   38061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:57:17.002087   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:17.088175   38061 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0906 15:57:17.110772   38061 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:57:17.110912   38061 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:57:17.142970   38061 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:57:17.142988   38061 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:57:17.143063   38061 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:57:17.173228   38061 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 15:57:17.173248   38061 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:57:17.173317   38061 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:57:17.245598   38061 cni.go:95] Creating CNI manager for ""
	I0906 15:57:17.245613   38061 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:57:17.245628   38061 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0906 15:57:17.245646   38061 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220906155618-22187 NodeName:newest-cni-20220906155618-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:57:17.245779   38061 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220906155618-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 15:57:17.245858   38061 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220906155618-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:newest-cni-20220906155618-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:57:17.245919   38061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:57:17.253199   38061 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:57:17.253256   38061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:57:17.260080   38061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0906 15:57:17.272307   38061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:57:17.284394   38061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
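
The rendered kubeadm config shown in full above is staged as /var/tmp/minikube/kubeadm.yaml.new before it is applied. kubeadm can sanity-check such a file without mutating anything; a sketch, assuming it is run inside the node against the staged path:

    # Parse and validate the staged config; --dry-run makes no real changes.
    sudo /var/lib/minikube/binaries/v1.25.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run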
	I0906 15:57:17.296588   38061 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:57:17.300356   38061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:57:17.309351   38061 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187 for IP: 192.168.76.2
	I0906 15:57:17.309460   38061 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:57:17.309510   38061 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:57:17.309588   38061 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/client.key
	I0906 15:57:17.309657   38061 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/apiserver.key.31bdca25
	I0906 15:57:17.309707   38061 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/proxy-client.key
	I0906 15:57:17.309917   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:57:17.309954   38061 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:57:17.309964   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:57:17.310003   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:57:17.310037   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:57:17.310067   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:57:17.310133   38061 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:57:17.310709   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:57:17.327210   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:57:17.343660   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:57:17.360107   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/newest-cni-20220906155618-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:57:17.377076   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:57:17.393612   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:57:17.410150   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:57:17.427412   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:57:17.444174   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:57:17.461431   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:57:17.478714   38061 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:57:17.496736   38061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
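The "scp memory -->" line differs from the file copies just before it: the source is content rendered in memory, streamed over the SSH transport rather than read from disk. A rough hand-rolled equivalent, where MINIKUBE_KEY and KUBECONFIG_CONTENT are placeholders and the port, user, and address come from the sshutil lines later in this log:

    # pipe in-memory content straight into sudo tee on the node
    ssh -p 59964 -i "$MINIKUBE_KEY" docker@127.0.0.1 \
        'sudo tee /var/lib/minikube/kubeconfig >/dev/null' <<< "$KUBECONFIG_CONTENT"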
	I0906 15:57:17.509660   38061 ssh_runner.go:195] Run: openssl version
	I0906 15:57:17.515007   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:57:17.522748   38061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:57:17.526709   38061 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:57:17.526758   38061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:57:17.532718   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:57:17.541924   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:57:17.549880   38061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:57:17.553907   38061 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:57:17.553943   38061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:57:17.559231   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:57:17.566305   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:57:17.573930   38061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:57:17.577911   38061 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:57:17.577954   38061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:57:17.583299   38061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
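Each test -s / openssl x509 -hash / ln -fs round above installs one CA: OpenSSL looks trust anchors up by subject hash, so the <hash>.0 symlink name must be the hash of the PEM it points at (b5213941 for minikubeCA.pem in this run). The whole sequence, condensed:

    # hash each PEM and publish it under the name OpenSSL will search for
    for pem in /usr/share/ca-certificates/*.pem; do
      hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941
      sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"
    done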
	I0906 15:57:17.590617   38061 kubeadm.go:396] StartCluster: {Name:newest-cni-20220906155618-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:newest-cni-20220906155618-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:57:17.590723   38061 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:57:17.619280   38061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:57:17.626780   38061 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:57:17.626797   38061 kubeadm.go:627] restartCluster start
	I0906 15:57:17.626850   38061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:57:17.633489   38061 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:17.633545   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:17.698328   38061 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220906155618-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:57:17.698493   38061 kubeconfig.go:127] "newest-cni-20220906155618-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:57:17.698808   38061 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
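kubeconfig.go repairs the file in place, serialized through the WriteFile lock shown above. Done by hand with stock kubectl, the repair would look roughly like this; the server port is this run's forwarded 59968, and the cert paths follow the profile layout seen elsewhere in this log:

    p=newest-cni-20220906155618-22187
    kubectl config set-cluster "$p" --server=https://127.0.0.1:59968 \
      --certificate-authority="$HOME/.minikube/ca.crt"
    kubectl config set-credentials "$p" \
      --client-certificate="$HOME/.minikube/profiles/$p/client.crt" \
      --client-key="$HOME/.minikube/profiles/$p/client.key"
    kubectl config set-context "$p" --cluster="$p" --user="$p"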
	I0906 15:57:17.699963   38061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:57:17.707550   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:17.707605   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:17.715574   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:17.917703   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:17.917865   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:17.927987   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.116188   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.116288   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.126409   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.315825   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.315906   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.324874   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.517798   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.517894   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.528617   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.717727   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.717892   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.728294   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:18.916619   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:18.916745   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:18.926897   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.116912   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.117070   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.127484   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.317704   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.317846   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.327947   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.516965   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.517098   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.527389   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.716964   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.717020   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.726042   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:19.915724   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:19.915814   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:19.924966   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.117734   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.117855   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.128595   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.317016   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.317146   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.326565   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.516591   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.516690   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.525551   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.717701   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.717839   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.728487   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.728497   38061 api_server.go:165] Checking apiserver status ...
	I0906 15:57:20.728540   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:57:20.736269   38061 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.736290   38061 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
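The block of identical pgrep probes above is a fixed-interval poll: roughly one attempt every 200 ms for about three seconds (15:57:17.7 through 15:57:20.7) before api_server.go declares the timeout. The same pattern as a shell loop, with the interval and deadline inferred from those timestamps:

    # -x exact match, -n newest process, -f match the full command line
    deadline=$((SECONDS + 3))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo 'timed out waiting for apiserver pid' >&2; break; }
      sleep 0.2
    done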
	I0906 15:57:20.736297   38061 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:57:20.736350   38061 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:57:20.766936   38061 docker.go:443] Stopping containers: [7e7ed90462fd b1ab90fe8437 02d6a781ff5f 4fcd0402fca9 645033b4c788 c8d7c56c2733 242860a8fe6d e009409cdecf 8e29e63a55f0 69bf13ac53df ca51b97bb84e 301a957fc9dc 4332abe62e59 15335225539e 4a180dfbd719 4d6c24833701 d8ce255eb4c2]
	I0906 15:57:20.767007   38061 ssh_runner.go:195] Run: docker stop 7e7ed90462fd b1ab90fe8437 02d6a781ff5f 4fcd0402fca9 645033b4c788 c8d7c56c2733 242860a8fe6d e009409cdecf 8e29e63a55f0 69bf13ac53df ca51b97bb84e 301a957fc9dc 4332abe62e59 15335225539e 4a180dfbd719 4d6c24833701 d8ce255eb4c2
	I0906 15:57:20.796405   38061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
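The teardown pairs a name-filtered container sweep with a kubelet stop: the Docker shim names every pod container k8s_<container>_<pod>_<namespace>_..., so one regex catches the whole kube-system control plane. Condensed:

    # collect matching container IDs, stop them, then stop the kubelet
    ids=$(docker ps -a --filter name='k8s_.*_(kube-system)_' --format '{{.ID}}')
    [ -n "$ids" ] && docker stop $ids   # word-splitting on $ids is intentional
    sudo systemctl stop kubelet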
	I0906 15:57:20.806165   38061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:57:20.813733   38061 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep  6 22:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep  6 22:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep  6 22:56 /etc/kubernetes/scheduler.conf
	
	I0906 15:57:20.813787   38061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:57:20.820907   38061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:57:20.827951   38061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:57:20.834797   38061 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.834845   38061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:57:20.841940   38061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:57:20.849076   38061 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:57:20.849121   38061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 15:57:20.855796   38061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:57:20.862900   38061 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:57:20.862913   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:20.907459   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:21.828843   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:21.955863   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:22.001994   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
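With existing configuration on disk, restartCluster replays only the kubeadm init phases it needs, in the order above, rather than running a full kubeadm init. The same sequence as a loop:

    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml   # $phase unquoted on purpose
    done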
	I0906 15:57:22.063473   38061 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:57:22.063538   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:57:22.612353   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:57:23.111514   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:57:23.125136   38061 api_server.go:71] duration metric: took 1.061671301s to wait for apiserver process to appear ...
	I0906 15:57:23.125153   38061 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:57:23.125169   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:23.126463   38061 api_server.go:256] stopped: https://127.0.0.1:59968/healthz: Get "https://127.0.0.1:59968/healthz": EOF
	I0906 15:57:23.627256   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:26.490286   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 15:57:26.490307   38061 api_server.go:102] status: https://127.0.0.1:59968/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 15:57:26.627254   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:26.633634   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:57:26.633655   38061 api_server.go:102] status: https://127.0.0.1:59968/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:57:27.127381   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:27.133087   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:57:27.133100   38061 api_server.go:102] status: https://127.0.0.1:59968/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:57:27.626768   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:27.633445   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 200:
	ok
	I0906 15:57:27.640047   38061 api_server.go:140] control plane version: v1.25.0
	I0906 15:57:27.661922   38061 api_server.go:130] duration metric: took 4.536744631s to wait for apiserver health ...
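The healthz probe is unauthenticated, which explains the progression above: first an EOF while the listener is still opening, then a 403 because the anonymous user is rejected until the rbac/bootstrap-roles post-start hook installs the role that lets unauthenticated callers read /healthz, then 500s whose bodies show that same hook (plus the priority-class bootstrap) still pending, and finally 200 once every hook reports ok. A comparable probe with curl, using this run's port:

    # -k skip TLS verification, -f turn 403/500 into empty output
    until [ "$(curl -ksf https://127.0.0.1:59968/healthz)" = ok ]; do
      sleep 0.5
    done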
	I0906 15:57:27.661945   38061 cni.go:95] Creating CNI manager for ""
	I0906 15:57:27.661953   38061 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:57:27.661968   38061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:57:27.669902   38061 system_pods.go:59] 9 kube-system pods found
	I0906 15:57:27.669922   38061 system_pods.go:61] "coredns-565d847f94-v2v64" [0dcab01c-2e4d-41bd-97e4-83387719a08a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 15:57:27.669928   38061 system_pods.go:61] "coredns-565d847f94-x2zfb" [c5c20b4d-204f-40b3-bb69-76373b532e0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 15:57:27.669933   38061 system_pods.go:61] "etcd-newest-cni-20220906155618-22187" [3daa576c-6b52-466a-a8d2-932e43340be3] Running
	I0906 15:57:27.669937   38061 system_pods.go:61] "kube-apiserver-newest-cni-20220906155618-22187" [7628f52b-b2df-4108-8a5b-a7be87bfcda6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:57:27.669941   38061 system_pods.go:61] "kube-controller-manager-newest-cni-20220906155618-22187" [a99f3285-f6c3-45ea-b605-169d6d139284] Running
	I0906 15:57:27.669945   38061 system_pods.go:61] "kube-proxy-c95tp" [58270e0b-3dbc-41bd-9301-6b57a78cb575] Running
	I0906 15:57:27.669951   38061 system_pods.go:61] "kube-scheduler-newest-cni-20220906155618-22187" [01916a12-c775-42b2-9eed-a4f2154502ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:57:27.669958   38061 system_pods.go:61] "metrics-server-5c8fd5cf8-sc5m5" [4f43d946-257b-4406-bad0-1a500e75d1fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:57:27.669964   38061 system_pods.go:61] "storage-provisioner" [4d676f03-34fa-46c9-8d96-bff836c74d3d] Running
	I0906 15:57:27.669973   38061 system_pods.go:74] duration metric: took 7.995857ms to wait for pod list to return data ...
	I0906 15:57:27.669982   38061 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:57:27.673950   38061 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:57:27.673965   38061 node_conditions.go:123] node cpu capacity is 6
	I0906 15:57:27.673976   38061 node_conditions.go:105] duration metric: took 3.983529ms to run NodePressure ...
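node_conditions.go reads the same fields kubectl exposes on the Node object: capacity from .status.capacity (the 61202244Ki and 6-cpu figures above) and the pressure conditions from .status.conditions. By hand:

    n=newest-cni-20220906155618-22187
    kubectl get node "$n" -o jsonpath='{.status.capacity.cpu} cpus, {.status.capacity.ephemeral-storage} ephemeral{"\n"}'
    kubectl get node "$n" -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'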
	I0906 15:57:27.673990   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:57:27.906964   38061 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 15:57:27.916175   38061 ops.go:34] apiserver oom_adj: -16
	I0906 15:57:27.916187   38061 kubeadm.go:631] restartCluster took 10.2893565s
	I0906 15:57:27.916195   38061 kubeadm.go:398] StartCluster complete in 10.32555533s
	I0906 15:57:27.916215   38061 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:57:27.916305   38061 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:57:27.918038   38061 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:57:27.921275   38061 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220906155618-22187" rescaled to 1
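kubeadm's CoreDNS deployment ships two replicas by default (both visible as pending pods above); a single-node cluster only needs one, so kapi.go rescales it. The equivalent kubectl invocation:

    kubectl -n kube-system scale deployment coredns --replicas=1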
	I0906 15:57:27.921320   38061 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 15:57:27.921352   38061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 15:57:27.960679   38061 out.go:177] * Verifying Kubernetes components...
	I0906 15:57:27.921363   38061 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 15:57:27.921528   38061 config.go:180] Loaded profile config "newest-cni-20220906155618-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:57:28.018705   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:57:28.018715   38061 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220906155618-22187"
	I0906 15:57:28.018719   38061 addons.go:65] Setting dashboard=true in profile "newest-cni-20220906155618-22187"
	I0906 15:57:28.018744   38061 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220906155618-22187"
	I0906 15:57:28.018758   38061 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220906155618-22187"
	I0906 15:57:28.018769   38061 addons.go:153] Setting addon dashboard=true in "newest-cni-20220906155618-22187"
	I0906 15:57:28.018769   38061 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220906155618-22187"
	I0906 15:57:28.018725   38061 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220906155618-22187"
	W0906 15:57:28.018775   38061 addons.go:162] addon storage-provisioner should already be in state true
	I0906 15:57:28.018797   38061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220906155618-22187"
	W0906 15:57:28.018777   38061 addons.go:162] addon dashboard should already be in state true
	I0906 15:57:28.018864   38061 host.go:66] Checking if "newest-cni-20220906155618-22187" exists ...
	I0906 15:57:28.018895   38061 host.go:66] Checking if "newest-cni-20220906155618-22187" exists ...
	W0906 15:57:28.018782   38061 addons.go:162] addon metrics-server should already be in state true
	I0906 15:57:28.018985   38061 host.go:66] Checking if "newest-cni-20220906155618-22187" exists ...
	I0906 15:57:28.019213   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.019277   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.019716   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.020396   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.125365   38061 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220906155618-22187"
	I0906 15:57:28.184608   38061 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	W0906 15:57:28.184636   38061 addons.go:162] addon default-storageclass should already be in state true
	I0906 15:57:28.130104   38061 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 15:57:28.130153   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:28.143458   38061 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 15:57:28.163258   38061 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 15:57:28.184686   38061 host.go:66] Checking if "newest-cni-20220906155618-22187" exists ...
	I0906 15:57:28.208057   38061 cli_runner.go:164] Run: docker container inspect newest-cni-20220906155618-22187 --format={{.State.Status}}
	I0906 15:57:28.285485   38061 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 15:57:28.227773   38061 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 15:57:28.264708   38061 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:57:28.306434   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 15:57:28.306465   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 15:57:28.306524   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 15:57:28.306539   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 15:57:28.306567   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:28.306572   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:28.306615   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:28.318662   38061 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:57:28.318783   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:57:28.338487   38061 api_server.go:71] duration metric: took 417.140277ms to wait for apiserver process to appear ...
	I0906 15:57:28.338507   38061 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:57:28.338551   38061 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59968/healthz ...
	I0906 15:57:28.349284   38061 api_server.go:266] https://127.0.0.1:59968/healthz returned 200:
	ok
	I0906 15:57:28.352422   38061 api_server.go:140] control plane version: v1.25.0
	I0906 15:57:28.352441   38061 api_server.go:130] duration metric: took 13.927721ms to wait for apiserver health ...
	I0906 15:57:28.352449   38061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:57:28.362871   38061 system_pods.go:59] 9 kube-system pods found
	I0906 15:57:28.362892   38061 system_pods.go:61] "coredns-565d847f94-v2v64" [0dcab01c-2e4d-41bd-97e4-83387719a08a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 15:57:28.362900   38061 system_pods.go:61] "coredns-565d847f94-x2zfb" [c5c20b4d-204f-40b3-bb69-76373b532e0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 15:57:28.362909   38061 system_pods.go:61] "etcd-newest-cni-20220906155618-22187" [3daa576c-6b52-466a-a8d2-932e43340be3] Running
	I0906 15:57:28.362922   38061 system_pods.go:61] "kube-apiserver-newest-cni-20220906155618-22187" [7628f52b-b2df-4108-8a5b-a7be87bfcda6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 15:57:28.362930   38061 system_pods.go:61] "kube-controller-manager-newest-cni-20220906155618-22187" [a99f3285-f6c3-45ea-b605-169d6d139284] Running
	I0906 15:57:28.362938   38061 system_pods.go:61] "kube-proxy-c95tp" [58270e0b-3dbc-41bd-9301-6b57a78cb575] Running
	I0906 15:57:28.362947   38061 system_pods.go:61] "kube-scheduler-newest-cni-20220906155618-22187" [01916a12-c775-42b2-9eed-a4f2154502ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:57:28.362957   38061 system_pods.go:61] "metrics-server-5c8fd5cf8-sc5m5" [4f43d946-257b-4406-bad0-1a500e75d1fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:57:28.362969   38061 system_pods.go:61] "storage-provisioner" [4d676f03-34fa-46c9-8d96-bff836c74d3d] Running
	I0906 15:57:28.362974   38061 system_pods.go:74] duration metric: took 10.521305ms to wait for pod list to return data ...
	I0906 15:57:28.362981   38061 default_sa.go:34] waiting for default service account to be created ...
	I0906 15:57:28.376559   38061 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 15:57:28.376571   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 15:57:28.376623   38061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220906155618-22187
	I0906 15:57:28.400382   38061 default_sa.go:45] found service account: "default"
	I0906 15:57:28.400411   38061 default_sa.go:55] duration metric: took 37.424247ms for default service account to be created ...
	I0906 15:57:28.400424   38061 kubeadm.go:573] duration metric: took 479.078431ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0906 15:57:28.400443   38061 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:57:28.403555   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:28.405108   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:28.405296   38061 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:57:28.405313   38061 node_conditions.go:123] node cpu capacity is 6
	I0906 15:57:28.405333   38061 node_conditions.go:105] duration metric: took 4.881031ms to run NodePressure ...
	I0906 15:57:28.405345   38061 start.go:216] waiting for startup goroutines ...
	I0906 15:57:28.407397   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:28.457981   38061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59964 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/newest-cni-20220906155618-22187/id_rsa Username:docker}
	I0906 15:57:28.554518   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 15:57:28.554533   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 15:57:28.611392   38061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 15:57:28.616594   38061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 15:57:28.619084   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 15:57:28.619108   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 15:57:28.622435   38061 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 15:57:28.622448   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 15:57:28.697389   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 15:57:28.697406   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 15:57:28.714525   38061 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 15:57:28.714539   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 15:57:28.721801   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 15:57:28.721813   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 15:57:28.734650   38061 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:57:28.734662   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 15:57:28.800327   38061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 15:57:28.802754   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 15:57:28.802765   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 15:57:28.904031   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 15:57:28.904044   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 15:57:28.919693   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 15:57:28.919709   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 15:57:28.937321   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 15:57:28.937335   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 15:57:28.952399   38061 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 15:57:28.952412   38061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 15:57:29.009410   38061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
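Every addon follows the same pattern seen above: manifests rendered in memory, copied into /etc/kubernetes/addons on the node, then applied in one kubectl invocation against the node-local kubeconfig. Since all the dashboard files land in that one directory, the explicit -f list is equivalent to a directory apply; in this run that would also re-submit the storage-provisioner and metrics-server manifests, which kubectl apply treats as a no-op:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/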
	I0906 15:57:29.520550   38061 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220906155618-22187"
	I0906 15:57:29.612280   38061 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0906 15:57:29.686985   38061 addons.go:414] enableAddons completed in 1.765634263s
	I0906 15:57:29.721418   38061 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 15:57:29.742936   38061 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220906155618-22187" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:57:14 UTC, end at Tue 2022-09-06 22:58:12 UTC. --
	Sep 06 22:57:16 newest-cni-20220906155618-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.594534107Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:57:16 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:16.601386079Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 22:57:28 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:28.438391740Z" level=info msg="ignoring event" container=a98e94475dc5fd2cbb13d1f45486c45139a22af69f7bf8b129948a825c479087 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:28 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:28.616203273Z" level=info msg="ignoring event" container=67916b6f42b8223be3461e04b4ca4e3b88976267aa4374cf2208942834092647 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:30 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:30.185900211Z" level=info msg="ignoring event" container=f1489789da494fd02a1a903ada0bce076f9fdc4d973d575c758dbad3154452a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:30 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:30.192573493Z" level=info msg="ignoring event" container=cca4a7193d04f316e168f959f6f1bcf62053b82ae344765470f8ff4c84675834 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:31 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:31.132669196Z" level=info msg="ignoring event" container=bc0c58d4e8074f6af48f4178bfa0a7cdfad56484347a0891190360d6d8a3f9f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:57:31 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:57:31.163294826Z" level=info msg="ignoring event" container=ab189abb8cb941f06e281704c883889b451667428121c4df0e19b2a1607f53c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:08 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:08.197834711Z" level=info msg="ignoring event" container=9e81bede69edd735bbf6b7c98878b8882b03952ac59a6d8523ef50b70a2fd831 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:08 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:08.396443649Z" level=info msg="ignoring event" container=ed73a377a15652d1abde879b4f4d12b4297ead39a6157f6073cd92e0d29e0844 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:08 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:08.611094783Z" level=info msg="ignoring event" container=5113caa94f82da4d5458bde02f094d767c915e769c021c663d33d40a8db66b41 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:08 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:08.810640658Z" level=info msg="ignoring event" container=cffa86fd598579c65aeed5a8de25fc79cb666e6f4c9331bb1d7d495c9409fdc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:09 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:09.057803253Z" level=info msg="ignoring event" container=1ab5a61b363e8216b553ca8cb206a1b8b87e4562e621adc45a8556423bf40ea1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:09 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:09.992306686Z" level=info msg="ignoring event" container=5c87373d51fff8e75b556caca75d87d4c3daba7fabb81484c8c7559cf3f1e62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:10 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:10.238794730Z" level=info msg="ignoring event" container=a0918c8a9da12c704101a82f57bea3316d2a05398720bfab97d65fa250ac7805 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:10 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:10.325930024Z" level=info msg="ignoring event" container=f8d7f8ca66833db62d36babc8344b3c826e9c9902da43ff7d8082ff9ff1c8bf0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:10 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:10.326768411Z" level=info msg="ignoring event" container=3d9bc1ccbd6a0cc1c46558d3d0757fbbf25c32062c990ff7423985bf89136d8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:10 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:10.919439727Z" level=info msg="ignoring event" container=2b59a5ae63c4e26572ac85839acd4b6a477d60f53bb8b7f53d78c37764bfd9aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:11 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:11.001468231Z" level=info msg="ignoring event" container=e31bedace1741caf893e1999d17c9c3b71cdb8f212d4e5d0a0619ebc9a952c90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:11 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:11.008655730Z" level=info msg="ignoring event" container=2df2942db8f1c275fada71a8cd8fd6858a67bba2132655ede4b3a5023df3ff1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:12 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:12.137551735Z" level=info msg="ignoring event" container=dba7859836d0fc8d4b70e216b1fbf804f5d7d2118b1d7e57d99be1d3a9aef332 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:12 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:12.182735218Z" level=info msg="ignoring event" container=dc76946713c7246bb0b33aefda88be486844e1a4ebe453703a789b02a3e3d6a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:12 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:12.202412656Z" level=info msg="ignoring event" container=b61d29aa1df448033463ba4578eec68329067b9c745705a86d99739c582fd156 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 22:58:12 newest-cni-20220906155618-22187 dockerd[610]: time="2022-09-06T22:58:12.208170634Z" level=info msg="ignoring event" container=1ccb553058b0d3920ce74628f08f37317d48c4a21768ceb4ccffe7136e6f83d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	7905db8500b06       6e38f40d628db       44 seconds ago       Running             storage-provisioner       1                   f2a36428b0ad7
	5145ef6106ee4       58a9a0c6d96f2       45 seconds ago       Running             kube-proxy                1                   74836d866bfe2
	dd0bd3030ab1a       a8a176a5d5d69       50 seconds ago       Running             etcd                      1                   83e31d1500b8a
	8a9751ebcecee       1a54c86c03a67       50 seconds ago       Running             kube-controller-manager   1                   4ac8fcb993a9f
	5ebff905e818d       bef2cf3115095       50 seconds ago       Running             kube-scheduler            1                   61380dd926407
	c00df6994e9d8       4d2edfd10d3e3       50 seconds ago       Running             kube-apiserver            1                   3ef124b7f4bfc
	02d6a781ff5fb       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   4fcd0402fca9a
	c8d7c56c27339       58a9a0c6d96f2       About a minute ago   Exited              kube-proxy                0                   e009409cdecf0
	69bf13ac53df3       a8a176a5d5d69       About a minute ago   Exited              etcd                      0                   15335225539e2
	ca51b97bb84ea       bef2cf3115095       About a minute ago   Exited              kube-scheduler            0                   4d6c248337013
	301a957fc9dc9       4d2edfd10d3e3       About a minute ago   Exited              kube-apiserver            0                   d8ce255eb4c23
	4332abe62e595       1a54c86c03a67       About a minute ago   Exited              kube-controller-manager   0                   4a180dfbd7191
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220906155618-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220906155618-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=newest-cni-20220906155618-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T15_56_44_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 22:56:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220906155618-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 22:58:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 22:58:04 +0000   Tue, 06 Sep 2022 22:56:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 22:58:04 +0000   Tue, 06 Sep 2022 22:56:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 22:58:04 +0000   Tue, 06 Sep 2022 22:56:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 06 Sep 2022 22:58:04 +0000   Tue, 06 Sep 2022 22:58:04 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20220906155618-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                bf14c125-9485-443b-a7ea-21aee3a246d8
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-v2v64                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     76s
	  kube-system                 etcd-newest-cni-20220906155618-22187                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         89s
	  kube-system                 kube-apiserver-newest-cni-20220906155618-22187             250m (4%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-newest-cni-20220906155618-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-c95tp                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-newest-cni-20220906155618-22187             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 metrics-server-5c8fd5cf8-sc5m5                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         74s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kubernetes-dashboard        dashboard-metrics-scraper-7b94984548-l4lmd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kubernetes-dashboard        kubernetes-dashboard-54596f475f-cfznr                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 45s                kube-proxy       
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientPID
	  Normal  NodeReady                89s                kubelet          Node newest-cni-20220906155618-22187 status is now: NodeReady
	  Normal  RegisteredNode           77s                node-controller  Node newest-cni-20220906155618-22187 event: Registered Node newest-cni-20220906155618-22187 in Controller
	  Normal  NodeHasSufficientMemory  51s (x5 over 51s)  kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientMemory
	  Normal  Starting                 51s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    51s (x5 over 51s)  kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x4 over 51s)  kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  51s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node newest-cni-20220906155618-22187 event: Registered Node newest-cni-20220906155618-22187 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s                 kubelet          Node newest-cni-20220906155618-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             9s                 kubelet          Node newest-cni-20220906155618-22187 status is now: NodeNotReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [69bf13ac53df] <==
	* {"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-09-06T22:56:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220906155618-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:56:39.962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:56:39.963Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:56:39.963Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-09-06T22:56:39.965Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:56:39.965Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:56:39.965Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:57:00.273Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-09-06T22:57:00.273Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220906155618-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/09/06 22:57:00 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/09/06 22:57:00 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-09-06T22:57:00.339Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-09-06T22:57:00.341Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:57:00.342Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:57:00.342Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220906155618-22187","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [dd0bd3030ab1] <==
	* {"level":"info","ts":"2022-09-06T22:57:23.311Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-09-06T22:57:23.312Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-09-06T22:57:23.312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-09-06T22:57:23.312Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-09-06T22:57:23.313Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:57:23.313Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T22:57:23.315Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-06T22:57:23.315Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:57:23.316Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T22:57:23.316Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T22:57:23.316Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-09-06T22:57:24.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220906155618-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T22:57:24.957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T22:57:24.958Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T22:57:24.958Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:58:14 up  1:14,  0 users,  load average: 1.52, 1.10, 1.04
	Linux newest-cni-20220906155618-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [301a957fc9dc] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:57:10.241969       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:57:10.252993       1 logging.go:59] [core] [Channel #13 SubChannel #14] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 22:57:10.310962       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [c00df6994e9d] <==
	* I0906 22:57:26.604683       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 22:57:26.604750       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0906 22:57:26.609464       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0906 22:57:26.614384       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0906 22:57:26.624224       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 22:57:27.311452       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 22:57:27.495688       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 22:57:27.621570       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:57:27.621605       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0906 22:57:27.621611       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 22:57:27.621620       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 22:57:27.621652       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 22:57:27.622626       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 22:57:27.817311       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 22:57:27.826493       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 22:57:27.850939       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 22:57:27.865639       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 22:57:27.870454       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 22:57:29.520120       1 controller.go:616] quota admission added evaluator for: namespaces
	I0906 22:57:29.588937       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.73.55]
	I0906 22:57:29.597123       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.107.63.137]
	I0906 22:58:04.196875       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 22:58:04.396812       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0906 22:58:04.511386       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [4332abe62e59] <==
	* I0906 22:56:56.393549       1 shared_informer.go:262] Caches are synced for daemon sets
	I0906 22:56:56.393592       1 shared_informer.go:262] Caches are synced for node
	I0906 22:56:56.393607       1 range_allocator.go:166] Starting range CIDR allocator
	I0906 22:56:56.393610       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0906 22:56:56.393614       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0906 22:56:56.397592       1 range_allocator.go:367] Set node newest-cni-20220906155618-22187 PodCIDR to [192.168.0.0/24]
	I0906 22:56:56.438014       1 shared_informer.go:262] Caches are synced for endpoint
	I0906 22:56:56.439732       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0906 22:56:56.452503       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:56:56.478482       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0906 22:56:56.491419       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:56:56.522178       1 shared_informer.go:262] Caches are synced for job
	I0906 22:56:56.538440       1 shared_informer.go:262] Caches are synced for cronjob
	I0906 22:56:56.542805       1 shared_informer.go:262] Caches are synced for persistent volume
	I0906 22:56:56.906361       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:56:56.987304       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:56:56.987322       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 22:56:57.193835       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0906 22:56:57.205687       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0906 22:56:57.245460       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c95tp"
	I0906 22:56:57.391954       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-x2zfb"
	I0906 22:56:57.404827       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-v2v64"
	I0906 22:56:57.421129       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-x2zfb"
	I0906 22:56:59.646459       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I0906 22:56:59.651715       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-sc5m5"
	
	* 
	* ==> kube-controller-manager [8a9751ebcece] <==
	* W0906 22:58:04.430048       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="newest-cni-20220906155618-22187" does not exist
	I0906 22:58:04.430699       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:58:04.430766       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 22:58:04.438062       1 shared_informer.go:262] Caches are synced for GC
	I0906 22:58:04.439414       1 shared_informer.go:262] Caches are synced for daemon sets
	I0906 22:58:04.446850       1 shared_informer.go:262] Caches are synced for persistent volume
	I0906 22:58:04.461330       1 shared_informer.go:262] Caches are synced for disruption
	I0906 22:58:04.461427       1 shared_informer.go:262] Caches are synced for TTL
	I0906 22:58:04.490235       1 shared_informer.go:262] Caches are synced for taint
	I0906 22:58:04.490582       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0906 22:58:04.490599       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0906 22:58:04.490664       1 taint_manager.go:209] "Sending events to api server"
	I0906 22:58:04.490767       1 event.go:294] "Event occurred" object="newest-cni-20220906155618-22187" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220906155618-22187 event: Registered Node newest-cni-20220906155618-22187 in Controller"
	W0906 22:58:04.490689       1 node_lifecycle_controller.go:1058] Missing timestamp for Node newest-cni-20220906155618-22187. Assuming now as a timestamp.
	I0906 22:58:04.490827       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0906 22:58:04.495307       1 shared_informer.go:262] Caches are synced for node
	I0906 22:58:04.495343       1 range_allocator.go:166] Starting range CIDR allocator
	I0906 22:58:04.495347       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0906 22:58:04.495353       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0906 22:58:04.499049       1 shared_informer.go:262] Caches are synced for attach detach
	I0906 22:58:04.504815       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0906 22:58:04.918990       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:58:04.919006       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 22:58:04.922268       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 22:58:09.470632       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	
	* 
	* ==> kube-proxy [5145ef6106ee] <==
	* I0906 22:57:27.997479       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:57:27.997531       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:57:27.997548       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:57:28.062388       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:57:28.062497       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:57:28.062514       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:57:28.062532       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:57:28.062578       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:57:28.062762       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:57:28.063088       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:57:28.063142       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:57:28.065527       1 config.go:317] "Starting service config controller"
	I0906 22:57:28.065549       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:57:28.065569       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:57:28.065572       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:57:28.066349       1 config.go:444] "Starting node config controller"
	I0906 22:57:28.066355       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:57:28.166355       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:57:28.166382       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:57:28.166452       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [c8d7c56c2733] <==
	* I0906 22:56:58.247572       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 22:56:58.247651       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 22:56:58.247687       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 22:56:58.271190       1 server_others.go:206] "Using iptables Proxier"
	I0906 22:56:58.271232       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 22:56:58.271241       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 22:56:58.271252       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 22:56:58.271266       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:56:58.271492       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 22:56:58.271825       1 server.go:661] "Version info" version="v1.25.0"
	I0906 22:56:58.271852       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:56:58.272314       1 config.go:317] "Starting service config controller"
	I0906 22:56:58.272348       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 22:56:58.272361       1 config.go:226] "Starting endpoint slice config controller"
	I0906 22:56:58.272364       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 22:56:58.274663       1 config.go:444] "Starting node config controller"
	I0906 22:56:58.274692       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 22:56:58.372676       1 shared_informer.go:262] Caches are synced for service config
	I0906 22:56:58.372804       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 22:56:58.374778       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [5ebff905e818] <==
	* W0906 22:57:23.238073       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0906 22:57:23.918702       1 serving.go:348] Generated self-signed cert in-memory
	W0906 22:57:26.515558       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 22:57:26.515756       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:57:26.515851       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 22:57:26.515926       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 22:57:26.534185       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.0"
	I0906 22:57:26.534371       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 22:57:26.536123       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 22:57:26.536208       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 22:57:26.536661       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:57:26.536235       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 22:57:26.638031       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ca51b97bb84e] <==
	* W0906 22:56:41.555709       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 22:56:41.555743       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 22:56:41.555742       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 22:56:41.555813       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 22:56:41.555941       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 22:56:41.555970       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:56:42.431338       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 22:56:42.431418       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 22:56:42.437187       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 22:56:42.437241       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 22:56:42.472021       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 22:56:42.472074       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 22:56:42.492956       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 22:56:42.493011       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 22:56:42.591470       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 22:56:42.591515       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 22:56:42.652609       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 22:56:42.652656       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 22:56:42.666757       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 22:56:42.666796       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0906 22:56:45.638951       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 22:57:00.272767       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0906 22:57:00.272804       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0906 22:57:00.273000       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0906 22:57:00.273084       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:57:14 UTC, end at Tue 2022-09-06 22:58:16 UTC. --
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:  > pod="kube-system/coredns-565d847f94-v2v64"
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:15.741852    3624 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-565d847f94-v2v64_kube-system(0dcab01c-2e4d-41bd-97e4-83387719a08a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-565d847f94-v2v64_kube-system(0dcab01c-2e4d-41bd-97e4-83387719a08a)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"b80e8275a2f885f1ce973bc2c4a1553fcd047670d8f585dddd831caf73f2a2e2\\\" network for pod \\\"coredns-565d847f94-v2v64\\\": networkPlugin cni failed to set up pod \\\"coredns-565d847f94-v2v64_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"b80e8275a2f885f1ce973bc2c4a1553fcd047670d8f585dddd831caf73f2a2e2\\\" network for pod \\\"coredns-565d847f94-v2v64\\\": networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-v2v64_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.34 -j CNI-06ed4c0318ff6e7744df5195 -m comment --comment name: \\\"crio\\\" id: \\\"b80e8275a2f885f1ce973bc2c4a1553fcd047670d8f585dddd831caf73f2a2e2\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-06ed4c0318ff6e7744df5195':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-565d847f94-v2v64" podUID=0dcab01c-2e4d-41bd-97e4-83387719a08a
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:15.980519    3624 remote_runtime.go:233] "RunPodSandbox from runtime service failed" err=<
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         rpc error: code = Unknown desc = [failed to set up sandbox container "c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d" network for pod "dashboard-metrics-scraper-7b94984548-l4lmd": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d" network for pod "dashboard-metrics-scraper-7b94984548-l4lmd": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.35 -j CNI-6c27819fbe175297223d9220 -m comment --comment name: "crio" id: "c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6c27819fbe175297223d9220':No such file or directory
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         Try `iptables -h' or 'iptables --help' for more information.
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         ]
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:  >
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:15.980567    3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=<
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         rpc error: code = Unknown desc = [failed to set up sandbox container "c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d" network for pod "dashboard-metrics-scraper-7b94984548-l4lmd": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d" network for pod "dashboard-metrics-scraper-7b94984548-l4lmd": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.35 -j CNI-6c27819fbe175297223d9220 -m comment --comment name: "crio" id: "c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6c27819fbe175297223d9220':No such file or directory
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         Try `iptables -h' or 'iptables --help' for more information.
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         ]
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:  > pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-l4lmd"
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:15.980586    3624 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err=<
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         rpc error: code = Unknown desc = [failed to set up sandbox container "c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d" network for pod "dashboard-metrics-scraper-7b94984548-l4lmd": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d" network for pod "dashboard-metrics-scraper-7b94984548-l4lmd": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.35 -j CNI-6c27819fbe175297223d9220 -m comment --comment name: "crio" id: "c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6c27819fbe175297223d9220':No such file or directory
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         Try `iptables -h' or 'iptables --help' for more information.
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:         ]
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]:  > pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-l4lmd"
	Sep 06 22:58:15 newest-cni-20220906155618-22187 kubelet[3624]: E0906 22:58:15.980687    3624 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard(036636d7-e2e2-4b07-90af-2c403778af15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard(036636d7-e2e2-4b07-90af-2c403778af15)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d\\\" network for pod \\\"dashboard-metrics-scraper-7b94984548-l4lmd\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d\\\" network for pod \\\"dashboard-metrics-scraper-7b94984548-l4lmd\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-7b94984548-l4lmd_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.35 -j CNI-6c27819fbe175297223d9220 -m comment --comment name: \\\"crio\\\" id: \\\"c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6c27819fbe175297223d9220':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-l4lmd" podUID=036636d7-e2e2-4b07-90af-2c403778af15
	Sep 06 22:58:16 newest-cni-20220906155618-22187 kubelet[3624]: I0906 22:58:16.026638    3624 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4c94ad8085564991f49f583680e520c7efac9268196467366e7ed044cb86302d"
	Sep 06 22:58:16 newest-cni-20220906155618-22187 kubelet[3624]: I0906 22:58:16.036904    3624 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b80e8275a2f885f1ce973bc2c4a1553fcd047670d8f585dddd831caf73f2a2e2"
	Sep 06 22:58:16 newest-cni-20220906155618-22187 kubelet[3624]: I0906 22:58:16.045440    3624 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c197edbe4ab0980d8fa84382c45401a18e7fb21137876647c86e8fdf7102243d"
	Sep 06 22:58:16 newest-cni-20220906155618-22187 kubelet[3624]: I0906 22:58:16.054930    3624 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3a3844deea42c527059a03d670007631acbb8a87879b4a57cbe69c9fea2d3b6a"
	
	* 
	* ==> storage-provisioner [02d6a781ff5f] <==
	* I0906 22:56:59.402011       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:56:59.409781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:56:59.409861       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:56:59.414294       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:56:59.414482       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220906155618-22187_189a4ab3-5ed6-4342-ad44-f821097217c0!
	I0906 22:56:59.414451       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25cd2a16-7724-4497-b738-8b32316e2630", APIVersion:"v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220906155618-22187_189a4ab3-5ed6-4342-ad44-f821097217c0 became leader
	I0906 22:56:59.514667       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220906155618-22187_189a4ab3-5ed6-4342-ad44-f821097217c0!
	
	* 
	* ==> storage-provisioner [7905db8500b0] <==
	* I0906 22:57:28.853423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 22:57:28.921020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 22:57:28.921314       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 22:58:04.199365       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 22:58:04.199549       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220906155618-22187_0b6ca968-72ec-4204-8c11-1f6905a023a5!
	I0906 22:58:04.202203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25cd2a16-7724-4497-b738-8b32316e2630", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220906155618-22187_0b6ca968-72ec-4204-8c11-1f6905a023a5 became leader
	I0906 22:58:04.302220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220906155618-22187_0b6ca968-72ec-4204-8c11-1f6905a023a5!
	

                                                
                                                
-- /stdout --
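[editor's note] The kubelet entries in the log above all fail the same way: the bridge CNI plugin cannot assign an address to cni0 ("could not add IP address to cni0: permission denied"), and the subsequent sandbox teardown fails with exit status 2 because the per-sandbox CNI-* NAT chain was never installed. A quick way to confirm both symptoms from the host is sketched below; the node container name and the chain prefix come verbatim from the log, nothing else is assumed:

	# Does the CNI bridge have an address at all?
	docker exec newest-cni-20220906155618-22187 ip addr show cni0
	# List NAT rules; the CNI-* chains named in the teardown errors should be absent
	docker exec newest-cni-20220906155618-22187 iptables -t nat -S | grep CNI- || echo "no CNI-* chains installed"

If cni0 carries no address and no CNI-* chains exist, the repeated CreatePodSandbox errors are a bridge-plugin setup failure rather than a teardown race.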
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220906155618-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-v2v64 metrics-server-5c8fd5cf8-sc5m5 dashboard-metrics-scraper-7b94984548-l4lmd kubernetes-dashboard-54596f475f-cfznr
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220906155618-22187 describe pod coredns-565d847f94-v2v64 metrics-server-5c8fd5cf8-sc5m5 dashboard-metrics-scraper-7b94984548-l4lmd kubernetes-dashboard-54596f475f-cfznr
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220906155618-22187 describe pod coredns-565d847f94-v2v64 metrics-server-5c8fd5cf8-sc5m5 dashboard-metrics-scraper-7b94984548-l4lmd kubernetes-dashboard-54596f475f-cfznr: exit status 1 (105.223864ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-v2v64" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-sc5m5" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-7b94984548-l4lmd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-54596f475f-cfznr" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220906155618-22187 describe pod coredns-565d847f94-v2v64 metrics-server-5c8fd5cf8-sc5m5 dashboard-metrics-scraper-7b94984548-l4lmd kubernetes-dashboard-54596f475f-cfznr: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (46.84s)
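[editor's note] The NotFound errors in the stderr block above most likely mean the non-running pods listed by helpers_test.go:270 were deleted and recreated under new ReplicaSet hashes before the describe step ran, so describing them by stale name fails. A variant that resolves namespace and name in one pass and tolerates pods disappearing in between would look like this sketch (the loop is illustrative, not part of the test suite):

	kubectl --context newest-cni-20220906155618-22187 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	  | while read ns name; do
	      kubectl --context newest-cni-20220906155618-22187 -n "$ns" describe pod "$name" || true
	    done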

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (42.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220906155821-22187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187: exit status 2 (16.084999752s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187: exit status 2 (16.081147129s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220906155821-22187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187
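Editor's note: this failure mode matches the newest-cni group above: "minikube pause" returns, but the subsequent status polls still report "Stopped" where the test wants "Paused". Exit status 2 from "minikube status" is expected whenever a component is not running, which is why the harness logs "may be ok" and still uses the printed status string. A hedged sketch of that probe, under the assumption that the binary path (out/minikube-darwin-amd64) and profile name from this CI run apply; this is an illustration, not the test's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentStatus runs "minikube status" with a Go-template format and treats
// a non-zero exit as informational as long as a status string (for example
// "Stopped" or "Paused") was printed to stdout.
func componentStatus(profile, field string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	status := strings.TrimSpace(string(out))
	if err != nil {
		if _, ok := err.(*exec.ExitError); ok && status != "" {
			return status, nil // e.g. exit status 2: component stopped, not a command failure
		}
		return "", err
	}
	return status, nil
}

func main() {
	profile := "embed-certs-20220906155821-22187"
	for _, field := range []string{"APIServer", "Kubelet"} {
		got, err := componentStatus(profile, field)
		if err != nil {
			fmt.Println(field, "check failed:", err)
			continue
		}
		fmt.Printf("%s = %q; want %q after pause\n", field, got, "Paused")
	}
}

Run against a live profile with "go run" to reproduce the probe; the relative binary path is specific to this CI workspace.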
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220906155821-22187
helpers_test.go:235: (dbg) docker inspect embed-certs-20220906155821-22187:

-- stdout --
	[
	    {
	        "Id": "3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099",
	        "Created": "2022-09-06T22:58:27.553768906Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:59:31.549970745Z",
	            "FinishedAt": "2022-09-06T22:59:29.52348373Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099/hosts",
	        "LogPath": "/var/lib/docker/containers/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099-json.log",
	        "Name": "/embed-certs-20220906155821-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220906155821-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220906155821-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/33b9b39b8de3dd3cf661a150ecae4b4103a3bbbc06c24db5ede6ea05bccd5c24-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33b9b39b8de3dd3cf661a150ecae4b4103a3bbbc06c24db5ede6ea05bccd5c24/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33b9b39b8de3dd3cf661a150ecae4b4103a3bbbc06c24db5ede6ea05bccd5c24/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33b9b39b8de3dd3cf661a150ecae4b4103a3bbbc06c24db5ede6ea05bccd5c24/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220906155821-22187",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220906155821-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220906155821-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220906155821-22187",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220906155821-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7e06add961644867baf0052ed3e0dbee57095f66a6a4d08976107dc7d0f32d6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60235"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60236"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60237"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60238"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60239"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e7e06add9616",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220906155821-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ef72ce85134",
	                        "embed-certs-20220906155821-22187"
	                    ],
	                    "NetworkID": "b1884146802eeb80d7a8e8de1d1caceb01aac205af1415343b6042b89d618623",
	                    "EndpointID": "c20ab62dc7978005c256ab050c7a7d95a240d3e44f89ff97038a1032b15a4ec5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
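Editor's note: the inspect output above shows "Status": "running" with "Paused": false, which is expected even during a successful pause: minikube pause targets the Kubernetes components inside the kic container rather than pausing the outer Docker container, so the docker-level state stays "running" either way. A short sketch of the same Go-template state probe the harness's cli_runner lines use (see the logs below), assuming the container name from this capture:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "embed-certs-20220906155821-22187"
	// Read the container state fields via docker's -f Go-template flag.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "status={{.State.Status}} paused={{.State.Paused}}", name).CombinedOutput()
	if err != nil {
		fmt.Printf("inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out)) // for the capture above: "status=running paused=false"
}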
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220906155821-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220906155821-22187 logs -n 25: (2.604437938s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220906155820-22187      | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | disable-driver-mounts-20220906155820-22187                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:04 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:04 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:05 PDT | 06 Sep 22 16:05 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:59:30
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:59:30.262038   38636 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:59:30.262188   38636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:59:30.262193   38636 out.go:309] Setting ErrFile to fd 2...
	I0906 15:59:30.262197   38636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:59:30.262308   38636 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:59:30.262744   38636 out.go:303] Setting JSON to false
	I0906 15:59:30.277675   38636 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10741,"bootTime":1662494429,"procs":336,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:59:30.277782   38636 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:59:30.299234   38636 out.go:177] * [embed-certs-20220906155821-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:59:30.341461   38636 notify.go:193] Checking for updates...
	I0906 15:59:30.363080   38636 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:59:30.384168   38636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:59:30.405458   38636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:59:30.426996   38636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:59:30.448360   38636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:59:30.470635   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:59:30.471106   38636 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:59:30.539352   38636 docker.go:137] docker version: linux-20.10.17
	I0906 15:59:30.539462   38636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:59:30.670843   38636 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:59:30.614641007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:59:30.712577   38636 out.go:177] * Using the docker driver based on existing profile
	I0906 15:59:30.734837   38636 start.go:284] selected driver: docker
	I0906 15:59:30.734870   38636 start.go:808] validating driver "docker" against &{Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:30.735025   38636 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:59:30.738354   38636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:59:30.869658   38636 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:59:30.81424686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:59:30.869799   38636 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:59:30.869818   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:30.869829   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:30.869843   38636 start_flags.go:310] config:
	{Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cl
uster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:30.912149   38636 out.go:177] * Starting control plane node embed-certs-20220906155821-22187 in cluster embed-certs-20220906155821-22187
	I0906 15:59:30.933415   38636 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:59:30.954429   38636 out.go:177] * Pulling base image ...
	I0906 15:59:31.001627   38636 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:59:31.001689   38636 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:59:31.001724   38636 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:59:31.001744   38636 cache.go:57] Caching tarball of preloaded images
	I0906 15:59:31.001934   38636 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:59:31.001957   38636 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:59:31.002893   38636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/config.json ...
	I0906 15:59:31.066643   38636 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:59:31.066664   38636 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:59:31.066675   38636 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:59:31.066736   38636 start.go:364] acquiring machines lock for embed-certs-20220906155821-22187: {Name:mkf641e2928acfedb898f07b24fd168dccdc0551 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:59:31.066861   38636 start.go:368] acquired machines lock for "embed-certs-20220906155821-22187" in 104.801µs
	I0906 15:59:31.066880   38636 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:59:31.066891   38636 fix.go:55] fixHost starting: 
	I0906 15:59:31.067105   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 15:59:31.130023   38636 fix.go:103] recreateIfNeeded on embed-certs-20220906155821-22187: state=Stopped err=<nil>
	W0906 15:59:31.130050   38636 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:59:31.173435   38636 out.go:177] * Restarting existing docker container for "embed-certs-20220906155821-22187" ...
	I0906 15:59:31.194813   38636 cli_runner.go:164] Run: docker start embed-certs-20220906155821-22187
	I0906 15:59:31.539043   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 15:59:31.604033   38636 kic.go:415] container "embed-certs-20220906155821-22187" state is running.
	I0906 15:59:31.604697   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:31.675958   38636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/config.json ...
	I0906 15:59:31.676353   38636 machine.go:88] provisioning docker machine ...
	I0906 15:59:31.676379   38636 ubuntu.go:169] provisioning hostname "embed-certs-20220906155821-22187"
	I0906 15:59:31.676439   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:31.744270   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:31.744484   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:31.744500   38636 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220906155821-22187 && echo "embed-certs-20220906155821-22187" | sudo tee /etc/hostname
	I0906 15:59:31.866514   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220906155821-22187
	
	I0906 15:59:31.866600   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:31.931384   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:31.931532   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:31.931548   38636 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220906155821-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220906155821-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220906155821-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:59:32.043786   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:59:32.043809   38636 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:59:32.043831   38636 ubuntu.go:177] setting up certificates
	I0906 15:59:32.043843   38636 provision.go:83] configureAuth start
	I0906 15:59:32.043910   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:32.109953   38636 provision.go:138] copyHostCerts
	I0906 15:59:32.110077   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:59:32.110087   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:59:32.110175   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:59:32.110375   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:59:32.110389   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:59:32.110445   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:59:32.110625   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:59:32.110632   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:59:32.110688   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:59:32.110800   38636 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220906155821-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220906155821-22187]
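Note: the server certificate generated above is issued from the local minikube CA with the node IP, loopback, and machine name as subject alternative names. A rough openssl equivalent of that issuance from a bash shell (hypothetical file names, abbreviated SAN list):

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.embed-certs-20220906155821-22187" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:192.168.76.2,IP:127.0.0.1,DNS:localhost,DNS:minikube') \
	  -out server.pem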
	I0906 15:59:32.234910   38636 provision.go:172] copyRemoteCerts
	I0906 15:59:32.234973   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:59:32.235024   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.301797   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:32.384511   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:59:32.404630   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0906 15:59:32.423185   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:59:32.442534   38636 provision.go:86] duration metric: configureAuth took 398.671593ms
	I0906 15:59:32.442548   38636 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:59:32.442701   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:59:32.442763   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.508255   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.508405   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.508426   38636 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:59:32.623407   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:59:32.623421   38636 ubuntu.go:71] root file system type: overlay
	I0906 15:59:32.623580   38636 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:59:32.623645   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.688184   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.688365   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.688423   38636 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:59:32.811885   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:59:32.811975   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.875508   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.875661   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.875674   38636 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:59:32.994163   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: 
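Note: the diff-guarded command above only swaps in the rendered unit and restarts Docker when the content actually changed, keeping the step idempotent across repeated starts. The blank ExecStart= inside the unit is the standard systemd idiom for replacing, rather than appending to, an inherited start command; systemd rejects multiple ExecStart entries for anything but Type=oneshot services. A minimal drop-in sketch of the same idiom (hypothetical override path, trimmed flags):

	# /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

After writing a drop-in like this, sudo systemctl daemon-reload && sudo systemctl restart docker applies it, mirroring the command above.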
	I0906 15:59:32.994185   38636 machine.go:91] provisioned docker machine in 1.317820355s
	I0906 15:59:32.994196   38636 start.go:300] post-start starting for "embed-certs-20220906155821-22187" (driver="docker")
	I0906 15:59:32.994202   38636 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:59:32.994271   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:59:32.994324   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.059474   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.140744   38636 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:59:33.144225   38636 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:59:33.144240   38636 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:59:33.144246   38636 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:59:33.144251   38636 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:59:33.144259   38636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:59:33.144377   38636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:59:33.144520   38636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:59:33.144661   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:59:33.151919   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:59:33.171420   38636 start.go:303] post-start completed in 177.213688ms
	I0906 15:59:33.171494   38636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:59:33.171543   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.236286   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.315015   38636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:59:33.319490   38636 fix.go:57] fixHost completed within 2.252593148s
	I0906 15:59:33.319503   38636 start.go:83] releasing machines lock for "embed-certs-20220906155821-22187", held for 2.252628285s
	I0906 15:59:33.319576   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:33.383050   38636 ssh_runner.go:195] Run: systemctl --version
	I0906 15:59:33.383109   38636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:59:33.383135   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.383168   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.450261   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.450290   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.581030   38636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:59:33.590993   38636 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:59:33.591044   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:59:33.602299   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:59:33.615635   38636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:59:33.686986   38636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:59:33.757095   38636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:59:33.825045   38636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:59:34.060910   38636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:59:34.126849   38636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:59:34.192180   38636 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:59:34.202955   38636 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:59:34.203017   38636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:59:34.206437   38636 start.go:471] Will wait 60s for crictl version
	I0906 15:59:34.206478   38636 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:59:34.302591   38636 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:59:34.302665   38636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:59:34.337107   38636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:59:34.413758   38636 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:59:34.413920   38636 cli_runner.go:164] Run: docker exec -t embed-certs-20220906155821-22187 dig +short host.docker.internal
	I0906 15:59:34.525925   38636 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:59:34.526040   38636 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:59:34.530030   38636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
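Note: the one-liner above is minikube's /etc/hosts refresh: strip any stale host.minikube.internal line, append the fresh mapping, stage the result in a temp file, then copy it back so the live file is rewritten in a single step. The same pattern, generalized (hypothetical NAME/IP values):

	NAME=host.minikube.internal IP=192.168.65.2
	{ grep -v $'\t'"${NAME}\$" /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts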
	I0906 15:59:34.539714   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:34.603049   38636 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:59:34.603134   38636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:59:34.633537   38636 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:59:34.633555   38636 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:59:34.633621   38636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:59:34.664984   38636 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:59:34.665007   38636 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:59:34.665091   38636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:59:34.744509   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:34.744522   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:34.744536   38636 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:59:34.744551   38636 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220906155821-22187 NodeName:embed-certs-20220906155821-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:59:34.744685   38636 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220906155821-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 15:59:34.744775   38636 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220906155821-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
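Note: the kubelet flags above are rendered into the systemd drop-in that the scp lines just below push to the node (the 494-byte 10-kubeadm.conf). To inspect the merged unit the node will actually run, drop-ins included, this works on the machine (sketch):

	sudo systemctl cat kubelet.service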
	I0906 15:59:34.744831   38636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:59:34.752036   38636 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:59:34.752086   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:59:34.758799   38636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0906 15:59:34.770909   38636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:59:34.782836   38636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
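Note: the 2054-byte kubeadm.yaml.new staged above is the multi-document YAML shown earlier: InitConfiguration and ClusterConfiguration for kubeadm itself, plus the KubeletConfiguration and KubeProxyConfiguration that kubeadm hands to those components. A quick check of which documents made it into the rendered file (sketch):

	grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new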
	I0906 15:59:34.795526   38636 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:59:34.799185   38636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:59:34.808319   38636 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187 for IP: 192.168.76.2
	I0906 15:59:34.808436   38636 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:59:34.808488   38636 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:59:34.808571   38636 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/client.key
	I0906 15:59:34.808633   38636 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.key.31bdca25
	I0906 15:59:34.808689   38636 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.key
	I0906 15:59:34.808881   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:59:34.808918   38636 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:59:34.808930   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:59:34.808969   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:59:34.809000   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:59:34.809031   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:59:34.809090   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:59:34.809639   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:59:34.826558   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:59:34.842729   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:59:34.859199   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:59:34.875553   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:59:34.892683   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:59:34.909267   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:59:34.925586   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:59:34.943279   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:59:34.960570   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:59:34.976829   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:59:34.993916   38636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:59:35.006394   38636 ssh_runner.go:195] Run: openssl version
	I0906 15:59:35.011296   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:59:35.019183   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.023061   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.023103   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.028251   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:59:35.035345   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:59:35.042841   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.046567   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.046608   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.051690   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:59:35.060553   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:59:35.068394   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.072508   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.072548   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.078010   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
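Note: the ls/openssl/ln sequence above installs each CA into the OpenSSL trust directory. OpenSSL looks up CAs in /etc/ssl/certs by subject-hash file name, so every PEM gets a <hash>.0 symlink (b5213941.0 for minikubeCA in this run). The same step by hand (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"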
	I0906 15:59:35.085338   38636 kubeadm.go:396] StartCluster: {Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:35.085441   38636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:59:35.114198   38636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:59:35.121678   38636 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:59:35.121695   38636 kubeadm.go:627] restartCluster start
	I0906 15:59:35.121742   38636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:59:35.129021   38636 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.129082   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:35.193199   38636 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220906155821-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:59:35.193376   38636 kubeconfig.go:127] "embed-certs-20220906155821-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:59:35.193711   38636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:59:35.195111   38636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:59:35.203811   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.203867   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.212091   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.413063   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.413147   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.423469   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.613039   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.613124   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.622019   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.812186   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.812267   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.821025   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.013432   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.013565   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.023339   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.212268   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.212352   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.220885   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.412199   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.412282   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.421519   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.612305   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.612379   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.621617   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.812269   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.812442   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.821913   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.012008   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.012110   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.021439   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.212257   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.212414   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.221560   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.412154   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.412213   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.421151   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.611593   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.611679   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.620601   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.813302   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.813472   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.822723   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.013156   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:38.013257   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:38.023237   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.212440   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:38.212572   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:38.221850   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.221859   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:38.221904   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:38.229570   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
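Note: the run above polls for the apiserver process roughly every 200ms and gives up after about three seconds, which is what drives the reconfigure decision on the next line. A bounded version of the same poll (interval and retry count are assumptions, not taken from the log):

	for i in $(seq 1 15); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && break
	  sleep 0.2
	done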
	I0906 15:59:38.229582   38636 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:59:38.229589   38636 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:59:38.229646   38636 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:59:38.258980   38636 docker.go:443] Stopping containers: [3ace43e3cdd0 fa4259ac8ae1 b10f76b0afab a46bff16a884 ac542e62f7da b9ad41cd6945 a33bf934daea 4f7a134f0b21 dfdc5f92562f d4f62ccab8af 48e63018d570 b925f58f7247 8753c7e8e889 cd1efc2e1d99 94326a96dd97 b67711366c6d]
	I0906 15:59:38.259054   38636 ssh_runner.go:195] Run: docker stop 3ace43e3cdd0 fa4259ac8ae1 b10f76b0afab a46bff16a884 ac542e62f7da b9ad41cd6945 a33bf934daea 4f7a134f0b21 dfdc5f92562f d4f62ccab8af 48e63018d570 b925f58f7247 8753c7e8e889 cd1efc2e1d99 94326a96dd97 b67711366c6d
	I0906 15:59:38.288935   38636 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:59:38.298782   38636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:59:38.306417   38636 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 22:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Sep  6 22:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep  6 22:58 /etc/kubernetes/scheduler.conf
	
	I0906 15:59:38.306467   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:59:38.313578   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:59:38.320753   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:59:38.327712   38636 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.327753   38636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:59:38.334398   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:59:38.341325   38636 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.341375   38636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 15:59:38.349241   38636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:59:38.356713   38636 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:59:38.356727   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:38.408089   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.277607   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.401052   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.451457   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.539398   38636 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:59:39.539455   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.047870   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.548175   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.608683   38636 api_server.go:71] duration metric: took 1.069984323s to wait for apiserver process to appear ...
	I0906 15:59:40.608708   38636 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:59:40.608729   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:40.609867   38636 api_server.go:256] stopped: https://127.0.0.1:60239/healthz: Get "https://127.0.0.1:60239/healthz": EOF
	I0906 15:59:41.110592   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:43.701073   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0906 15:59:43.701130   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0906 15:59:44.108296   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:44.115415   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:59:44.115431   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:59:44.608093   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:44.613832   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:59:44.613847   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 15:59:45.107569   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:45.113794   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 200:
	ok
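Note: the probe sequence above is the normal apiserver warm-up: 403 while anonymous access to /healthz is still rejected because the RBAC bootstrap roles do not exist yet, 500 while the rbac/bootstrap-roles and scheduling post-start hooks are pending, then 200 once every check passes. The per-check breakdown seen in the 500 responses can be requested directly (port 60239 is this run's mapped apiserver port; -k skips certificate verification):

	curl -k 'https://127.0.0.1:60239/healthz?verbose'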
	I0906 15:59:45.120558   38636 api_server.go:140] control plane version: v1.25.0
	I0906 15:59:45.120569   38636 api_server.go:130] duration metric: took 4.51431829s to wait for apiserver health ...
	I0906 15:59:45.120576   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:45.120585   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:45.120601   38636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:59:45.128405   38636 system_pods.go:59] 8 kube-system pods found
	I0906 15:59:45.128423   38636 system_pods.go:61] "coredns-565d847f94-5frt9" [0228f046-b179-4812-a7e5-c83cecc89e27] Running
	I0906 15:59:45.128429   38636 system_pods.go:61] "etcd-embed-certs-20220906155821-22187" [c2de4fd6-a0ae-4f47-85de-74bcc70bdb2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:59:45.128433   38636 system_pods.go:61] "kube-apiserver-embed-certs-20220906155821-22187" [0d53a9a2-f2dc-45fa-bce1-519c55da2cc4] Running
	I0906 15:59:45.128438   38636 system_pods.go:61] "kube-controller-manager-embed-certs-20220906155821-22187" [7cbb7baa-b9f1-4603-a7b9-8048df17b8dd] Running
	I0906 15:59:45.128443   38636 system_pods.go:61] "kube-proxy-zss4k" [f1dfb3a5-6fa4-48cf-95fa-0132b1ec5c8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 15:59:45.128448   38636 system_pods.go:61] "kube-scheduler-embed-certs-20220906155821-22187" [f8ba94d8-2b42-4733-b705-bc6af0b91d1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:59:45.128453   38636 system_pods.go:61] "metrics-server-5c8fd5cf8-cdg6d" [65746fe5-91aa-47c8-a8b4-d4a67f749ab8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:59:45.128456   38636 system_pods.go:61] "storage-provisioner" [13ae32f7-198b-4787-8687-aa39b2729274] Running
	I0906 15:59:45.128460   38636 system_pods.go:74] duration metric: took 7.85832ms to wait for pod list to return data ...
	I0906 15:59:45.128467   38636 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:59:45.131418   38636 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:59:45.131433   38636 node_conditions.go:123] node cpu capacity is 6
	I0906 15:59:45.131442   38636 node_conditions.go:105] duration metric: took 2.974231ms to run NodePressure ...
	I0906 15:59:45.131454   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:45.310869   38636 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:59:45.315021   38636 kubeadm.go:778] kubelet initialised
	I0906 15:59:45.315032   38636 kubeadm.go:779] duration metric: took 4.153612ms waiting for restarted kubelet to initialise ...
	I0906 15:59:45.315041   38636 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:59:45.320463   38636 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-5frt9" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:45.326126   38636 pod_ready.go:92] pod "coredns-565d847f94-5frt9" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:45.326135   38636 pod_ready.go:81] duration metric: took 5.66283ms waiting for pod "coredns-565d847f94-5frt9" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:45.326141   38636 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:47.335090   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:49.334484   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:51.337017   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:52.335838   38636 pod_ready.go:92] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:52.335849   38636 pod_ready.go:81] duration metric: took 7.012332045s waiting for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.335855   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.339996   38636 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:52.340004   38636 pod_ready.go:81] duration metric: took 4.146291ms waiting for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.340010   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:54.351029   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:56.848497   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:58.850674   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:59.347750   38636 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.347764   38636 pod_ready.go:81] duration metric: took 7.009427345s waiting for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.347771   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zss4k" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.351913   38636 pod_ready.go:92] pod "kube-proxy-zss4k" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.351921   38636 pod_ready.go:81] duration metric: took 4.135355ms waiting for pod "kube-proxy-zss4k" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.351927   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.356071   38636 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.356080   38636 pod_ready.go:81] duration metric: took 4.1483ms waiting for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.356087   38636 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" ...
	I0906 16:00:01.365786   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	[... 102 similar pod_ready.go:102 poll lines elided: pod "metrics-server-5c8fd5cf8-cdg6d" stayed "Ready":"False" on every check from 16:00:03 through 16:03:55 ...]
	I0906 16:03:58.362706   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	I0906 16:03:59.356938   38636 pod_ready.go:81] duration metric: took 4m0.004474184s waiting for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" ...
	E0906 16:03:59.356974   38636 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 16:03:59.356999   38636 pod_ready.go:38] duration metric: took 4m14.04989418s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:03:59.357025   38636 kubeadm.go:631] restartCluster took 4m24.248696346s
	W0906 16:03:59.357127   38636 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
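Each pod_ready.go:102 line above is one iteration of a poll-until-timeout loop; the 4m0s budget for "metrics-server-5c8fd5cf8-cdg6d" expires at 16:03:59, so the cluster is reset below. A sketch of that wait pattern using the apimachinery helper, where podIsReady is a hypothetical stand-in for minikube's actual readiness check and the interval is inferred from the timestamps:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // podIsReady is a hypothetical stand-in for the pod_ready check; the real
    // code inspects the pod's Ready condition via the Kubernetes API.
    func podIsReady(namespace, name string) (bool, error) {
        return false, nil // false with nil error means: poll again
    }

    func main() {
        // Roughly the cadence visible in the log above, capped at 4m0s.
        err := wait.PollImmediate(2500*time.Millisecond, 4*time.Minute,
            func() (bool, error) {
                return podIsReady("kube-system", "metrics-server-5c8fd5cf8-cdg6d")
            })
        if err == wait.ErrWaitTimeout {
            // Corresponds to the "(will not retry!)" timeout logged above.
            fmt.Println("timed out waiting for pod to be Ready")
        }
    }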
	I0906 16:03:59.357149   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 16:04:03.698932   38636 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.341781129s)
	I0906 16:04:03.698999   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:03.708822   38636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 16:04:03.716300   38636 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 16:04:03.716346   38636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 16:04:03.724386   38636 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 16:04:03.724421   38636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 16:04:03.767530   38636 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
	I0906 16:04:03.767567   38636 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 16:04:03.863194   38636 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 16:04:03.863313   38636 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 16:04:03.863392   38636 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 16:04:03.985091   38636 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 16:04:04.009873   38636 out.go:204]   - Generating certificates and keys ...
	I0906 16:04:04.009938   38636 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 16:04:04.010013   38636 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 16:04:04.010092   38636 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 16:04:04.010151   38636 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 16:04:04.010224   38636 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 16:04:04.010326   38636 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 16:04:04.010382   38636 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 16:04:04.010428   38636 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 16:04:04.010506   38636 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 16:04:04.010568   38636 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 16:04:04.010599   38636 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 16:04:04.010644   38636 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 16:04:04.112141   38636 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 16:04:04.428252   38636 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 16:04:04.781321   38636 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 16:04:04.891466   38636 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 16:04:04.902953   38636 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 16:04:04.903733   38636 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 16:04:04.903840   38636 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 16:04:04.989147   38636 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 16:04:05.010782   38636 out.go:204]   - Booting up control plane ...
	I0906 16:04:05.010866   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 16:04:05.010943   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 16:04:05.011017   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 16:04:05.011077   38636 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 16:04:05.011220   38636 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 16:04:10.494832   38636 kubeadm.go:317] [apiclient] All control plane components are healthy after 5.503264 seconds
	I0906 16:04:10.494909   38636 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 16:04:10.501767   38636 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 16:04:11.013788   38636 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 16:04:11.013935   38636 kubeadm.go:317] [mark-control-plane] Marking the node embed-certs-20220906155821-22187 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 16:04:11.519763   38636 kubeadm.go:317] [bootstrap-token] Using token: fqw8zb.b3unh498onihp969
	I0906 16:04:11.556084   38636 out.go:204]   - Configuring RBAC rules ...
	I0906 16:04:11.556186   38636 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 16:04:11.556258   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 16:04:11.595414   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 16:04:11.597593   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 16:04:11.600071   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 16:04:11.602066   38636 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 16:04:11.608914   38636 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 16:04:11.744220   38636 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0906 16:04:11.927532   38636 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0906 16:04:11.936157   38636 kubeadm.go:317] 
	I0906 16:04:11.936239   38636 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0906 16:04:11.936251   38636 kubeadm.go:317] 
	I0906 16:04:11.936347   38636 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0906 16:04:11.936360   38636 kubeadm.go:317] 
	I0906 16:04:11.936397   38636 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0906 16:04:11.936483   38636 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 16:04:11.936535   38636 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 16:04:11.936545   38636 kubeadm.go:317] 
	I0906 16:04:11.936592   38636 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0906 16:04:11.936601   38636 kubeadm.go:317] 
	I0906 16:04:11.936648   38636 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 16:04:11.936660   38636 kubeadm.go:317] 
	I0906 16:04:11.936721   38636 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0906 16:04:11.936790   38636 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 16:04:11.936860   38636 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 16:04:11.936870   38636 kubeadm.go:317] 
	I0906 16:04:11.936973   38636 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 16:04:11.937041   38636 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0906 16:04:11.937049   38636 kubeadm.go:317] 
	I0906 16:04:11.937130   38636 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token fqw8zb.b3unh498onihp969 \
	I0906 16:04:11.937205   38636 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
	I0906 16:04:11.937225   38636 kubeadm.go:317] 	--control-plane 
	I0906 16:04:11.937230   38636 kubeadm.go:317] 
	I0906 16:04:11.937297   38636 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0906 16:04:11.937303   38636 kubeadm.go:317] 
	I0906 16:04:11.937368   38636 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token fqw8zb.b3unh498onihp969 \
	I0906 16:04:11.937490   38636 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
	I0906 16:04:11.940643   38636 kubeadm.go:317] W0906 23:04:03.783659    7834 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 16:04:11.940759   38636 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 16:04:11.940841   38636 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 16:04:11.940910   38636 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 16:04:11.940926   38636 cni.go:95] Creating CNI manager for ""
	I0906 16:04:11.940937   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:04:11.940954   38636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 16:04:11.941016   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:11.941027   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4 minikube.k8s.io/name=embed-certs-20220906155821-22187 minikube.k8s.io/updated_at=2022_09_06T16_04_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:12.053740   38636 ops.go:34] apiserver oom_adj: -16
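The -16 above comes from reading the apiserver's /proc/<pid>/oom_adj, which biases the kernel OOM killer away from the process. A Go sketch mirroring the logged shell pipeline; minikube actually runs the /bin/bash form over SSH inside the node:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("kube-apiserver process not found:", err)
            return
        }
        pid := strings.Fields(string(out))[0]
        score, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        // -16, as logged, tells the kernel to strongly avoid OOM-killing it.
        fmt.Printf("apiserver oom_adj: %s", score)
    }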
	I0906 16:04:12.053787   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 24 identical "kubectl get sa default" retry lines elided, polled every ~0.5s from 16:04:12.6 through 16:04:24.1 ...]
	I0906 16:04:24.629870   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:24.693469   38636 kubeadm.go:1046] duration metric: took 12.752546325s to wait for elevateKubeSystemPrivileges.
	I0906 16:04:24.693487   38636 kubeadm.go:398] StartCluster complete in 4m49.621602402s
	I0906 16:04:24.693510   38636 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:04:24.693618   38636 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 16:04:24.694416   38636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:04:25.209438   38636 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220906155821-22187" rescaled to 1
	I0906 16:04:25.209475   38636 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:04:25.209488   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 16:04:25.209543   38636 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 16:04:25.248550   38636 out.go:177] * Verifying Kubernetes components...
	I0906 16:04:25.209701   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 16:04:25.248613   38636 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248614   38636 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248617   38636 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248621   38636 addons.go:65] Setting dashboard=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.274065   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 16:04:25.323012   38636 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323027   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:25.323031   38636 addons.go:153] Setting addon dashboard=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323035   38636 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323041   38636 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220906155821-22187"
	W0906 16:04:25.349810   38636 addons.go:162] addon storage-provisioner should already be in state true
	W0906 16:04:25.349817   38636 addons.go:162] addon metrics-server should already be in state true
	W0906 16:04:25.349808   38636 addons.go:162] addon dashboard should already be in state true
	I0906 16:04:25.349908   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.349908   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.350008   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.350278   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351712   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351778   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351905   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.372800   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.479636   38636 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 16:04:25.537415   38636 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0906 16:04:25.500699   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 16:04:25.537466   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 16:04:25.579923   38636 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:04:25.616492   38636 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 16:04:25.580057   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.618390   38636 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.675937   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 16:04:25.634198   38636 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220906155821-22187" to be "Ready" ...
	I0906 16:04:25.675960   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 16:04:25.654052   38636 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:04:25.676027   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0906 16:04:25.675946   38636 addons.go:162] addon default-storageclass should already be in state true
	I0906 16:04:25.676093   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.676134   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.676180   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.680582   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.694583   38636 node_ready.go:49] node "embed-certs-20220906155821-22187" has status "Ready":"True"
	I0906 16:04:25.694606   38636 node_ready.go:38] duration metric: took 18.642476ms waiting for node "embed-certs-20220906155821-22187" to be "Ready" ...
	I0906 16:04:25.694617   38636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:04:25.703428   38636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-7hgsh" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:25.769082   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.770815   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.771641   38636 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 16:04:25.771655   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 16:04:25.771721   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.771828   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.846515   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.908743   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 16:04:25.908759   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 16:04:25.923614   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:04:26.012628   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 16:04:26.012643   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 16:04:26.093532   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 16:04:26.093544   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 16:04:26.107106   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 16:04:26.111721   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 16:04:26.111737   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 16:04:26.197860   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 16:04:26.197879   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 16:04:26.222994   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 16:04:26.223005   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 16:04:26.290198   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 16:04:26.290219   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 16:04:26.306943   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 16:04:26.306956   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 16:04:26.389305   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 16:04:26.404625   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 16:04:26.404642   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 16:04:26.502869   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 16:04:26.502883   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 16:04:26.586788   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 16:04:26.586801   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 16:04:26.602971   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 16:04:26.602986   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 16:04:26.687833   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 16:04:26.989360   38636 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.639629341s)
	I0906 16:04:26.989402   38636 start.go:810] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0906 16:04:27.019123   38636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095487172s)
	I0906 16:04:27.105458   38636 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220906155821-22187"
	I0906 16:04:27.721184   38636 pod_ready.go:92] pod "coredns-565d847f94-7hgsh" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:27.721200   38636 pod_ready.go:81] duration metric: took 2.017760025s waiting for pod "coredns-565d847f94-7hgsh" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:27.721212   38636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-hwccr" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:27.884983   38636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.197113945s)
	I0906 16:04:27.919906   38636 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0906 16:04:27.956698   38636 addons.go:414] enableAddons completed in 2.747190456s
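The addon flow above stages each manifest under /etc/kubernetes/addons/ on the node (the "scp memory" lines) and then applies the batch with the node's version-pinned kubectl. A host-side sketch of that final apply step, with paths copied from this run's log; the real runner executes the command over SSH inside the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Apply staged manifests with the pinned kubectl against the
        // node-local kubeconfig, as the enableAddons log lines show. Only two
        // of the ten dashboard manifests are listed here for brevity.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.25.0/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
            "-f", "/etc/kubernetes/addons/dashboard-svc.yaml",
        )
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("apply failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }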
	I0906 16:04:29.734002   38636 pod_ready.go:102] pod "coredns-565d847f94-hwccr" in "kube-system" namespace has status "Ready":"False"
	I0906 16:04:30.232781   38636 pod_ready.go:92] pod "coredns-565d847f94-hwccr" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.232795   38636 pod_ready.go:81] duration metric: took 2.511583495s waiting for pod "coredns-565d847f94-hwccr" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.232802   38636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.241018   38636 pod_ready.go:92] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.241028   38636 pod_ready.go:81] duration metric: took 8.220934ms waiting for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.241036   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.246347   38636 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.246358   38636 pod_ready.go:81] duration metric: took 5.317921ms waiting for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.246365   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.251178   38636 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.271910   38636 pod_ready.go:81] duration metric: took 25.535498ms waiting for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.271928   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k97f9" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.278165   38636 pod_ready.go:92] pod "kube-proxy-k97f9" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.278179   38636 pod_ready.go:81] duration metric: took 6.242796ms waiting for pod "kube-proxy-k97f9" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.278197   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.630702   38636 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.630713   38636 pod_ready.go:81] duration metric: took 352.505269ms waiting for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.630719   38636 pod_ready.go:38] duration metric: took 4.93610349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
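The pod_ready waits above poll each pod's PodReady condition until it reports True, within the 6m0s budget logged per pod. A minimal client-go sketch of that check (kubeconfig path, pod name, and polling interval are illustrative, not minikube's actual wait code):

// podready_sketch.go: poll a pod's Ready condition, roughly what the
// pod_ready.go waits above are doing. Names and timeouts are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Wait up to 6 minutes, matching the "waiting up to 6m0s" lines above.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-565d847f94-hwccr", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("ready:", err == nil)
}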
	I0906 16:04:30.630735   38636 api_server.go:51] waiting for apiserver process to appear ...
	I0906 16:04:30.630784   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 16:04:30.645666   38636 api_server.go:71] duration metric: took 5.436188155s to wait for apiserver process to appear ...
	I0906 16:04:30.645679   38636 api_server.go:87] waiting for apiserver healthz status ...
	I0906 16:04:30.645686   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 16:04:30.651159   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 200:
	ok
	I0906 16:04:30.652511   38636 api_server.go:140] control plane version: v1.25.0
	I0906 16:04:30.652524   38636 api_server.go:130] duration metric: took 6.840548ms to wait for apiserver health ...
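The healthz wait above is an HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A stand-alone sketch of the same probe (port 60239 is the forwarded port from this particular run; skipping certificate verification here is an illustration shortcut only):

// healthz_sketch.go: probe the apiserver /healthz endpoint the way the
// api_server.go wait above does. Port 60239 is specific to this run.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://127.0.0.1:60239/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}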
	I0906 16:04:30.652530   38636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 16:04:30.833833   38636 system_pods.go:59] 9 kube-system pods found
	I0906 16:04:30.833849   38636 system_pods.go:61] "coredns-565d847f94-7hgsh" [94873873-9734-4e1f-8114-f59e04819eec] Running
	I0906 16:04:30.833853   38636 system_pods.go:61] "coredns-565d847f94-hwccr" [14797c46-59df-423f-9376-8faa955f2426] Running
	I0906 16:04:30.833859   38636 system_pods.go:61] "etcd-embed-certs-20220906155821-22187" [eaf284d5-7ece-438d-bf12-b222518876cf] Running
	I0906 16:04:30.833862   38636 system_pods.go:61] "kube-apiserver-embed-certs-20220906155821-22187" [bf038e93-a5ca-48e4-af4c-8d906a875d3a] Running
	I0906 16:04:30.833867   38636 system_pods.go:61] "kube-controller-manager-embed-certs-20220906155821-22187" [a46c5bff-a2cf-4305-8fdd-37c601cb2e63] Running
	I0906 16:04:30.833872   38636 system_pods.go:61] "kube-proxy-k97f9" [36966060-5270-424c-a005-81413d70656a] Running
	I0906 16:04:30.833878   38636 system_pods.go:61] "kube-scheduler-embed-certs-20220906155821-22187" [164df980-70d4-464b-a513-b5174ff3b963] Running
	I0906 16:04:30.833885   38636 system_pods.go:61] "metrics-server-5c8fd5cf8-xq9zv" [73f275fe-7d42-400b-ad93-df387c9ed53d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 16:04:30.833893   38636 system_pods.go:61] "storage-provisioner" [1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd] Running
	I0906 16:04:30.833900   38636 system_pods.go:74] duration metric: took 181.366286ms to wait for pod list to return data ...
	I0906 16:04:30.833906   38636 default_sa.go:34] waiting for default service account to be created ...
	I0906 16:04:31.030564   38636 default_sa.go:45] found service account: "default"
	I0906 16:04:31.030579   38636 default_sa.go:55] duration metric: took 196.655364ms for default service account to be created ...
	I0906 16:04:31.030585   38636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 16:04:31.234390   38636 system_pods.go:86] 9 kube-system pods found
	I0906 16:04:31.234405   38636 system_pods.go:89] "coredns-565d847f94-7hgsh" [94873873-9734-4e1f-8114-f59e04819eec] Running
	I0906 16:04:31.234410   38636 system_pods.go:89] "coredns-565d847f94-hwccr" [14797c46-59df-423f-9376-8faa955f2426] Running
	I0906 16:04:31.234413   38636 system_pods.go:89] "etcd-embed-certs-20220906155821-22187" [eaf284d5-7ece-438d-bf12-b222518876cf] Running
	I0906 16:04:31.234417   38636 system_pods.go:89] "kube-apiserver-embed-certs-20220906155821-22187" [bf038e93-a5ca-48e4-af4c-8d906a875d3a] Running
	I0906 16:04:31.234427   38636 system_pods.go:89] "kube-controller-manager-embed-certs-20220906155821-22187" [a46c5bff-a2cf-4305-8fdd-37c601cb2e63] Running
	I0906 16:04:31.234434   38636 system_pods.go:89] "kube-proxy-k97f9" [36966060-5270-424c-a005-81413d70656a] Running
	I0906 16:04:31.234438   38636 system_pods.go:89] "kube-scheduler-embed-certs-20220906155821-22187" [164df980-70d4-464b-a513-b5174ff3b963] Running
	I0906 16:04:31.234445   38636 system_pods.go:89] "metrics-server-5c8fd5cf8-xq9zv" [73f275fe-7d42-400b-ad93-df387c9ed53d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 16:04:31.234449   38636 system_pods.go:89] "storage-provisioner" [1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd] Running
	I0906 16:04:31.234455   38636 system_pods.go:126] duration metric: took 203.86794ms to wait for k8s-apps to be running ...
	I0906 16:04:31.234461   38636 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 16:04:31.234511   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:31.244461   38636 system_svc.go:56] duration metric: took 9.993449ms WaitForService to wait for kubelet.
	I0906 16:04:31.244474   38636 kubeadm.go:573] duration metric: took 6.035000594s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
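The kubelet check above shells out to systemctl; with --quiet, is-active reports only through its exit status (0 means active). Run locally rather than over SSH, the same probe reduces to (sketch):

// kubelet_active_sketch.go: the check behind the ssh_runner line above.
// `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}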
	I0906 16:04:31.244487   38636 node_conditions.go:102] verifying NodePressure condition ...
	I0906 16:04:31.430989   38636 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 16:04:31.431001   38636 node_conditions.go:123] node cpu capacity is 6
	I0906 16:04:31.431008   38636 node_conditions.go:105] duration metric: took 186.51865ms to run NodePressure ...
	I0906 16:04:31.431017   38636 start.go:216] waiting for startup goroutines ...
	I0906 16:04:31.467536   38636 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 16:04:31.509529   38636 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220906155821-22187" cluster and "default" namespace by default
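The "minor skew" reported two lines up compares the kubectl client and cluster minor versions; a skew of 0 or 1 is within kubectl's support policy. A toy version of the computation (not minikube's actual code; it assumes well-formed major.minor.patch strings):

// skew_sketch.go: recompute the "(minor skew: 0)" figure logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.25.0", "1.25.0"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}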
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:59:31 UTC, end at Tue 2022-09-06 23:05:21 UTC. --
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.523664517Z" level=info msg="ignoring event" container=607b0a83403a4147ab7c157f64605a138b630b15d8fa6c96fb5fe0548f78f904 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.594629224Z" level=info msg="ignoring event" container=a102512a908c578116b4e64385c5a0d5abf4621f477af2f15c60dd0bfe766e5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.663288926Z" level=info msg="ignoring event" container=3553ecd0470e12d62207340a4287e7241face56b799222aaffc29abf65de5154 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.743807375Z" level=info msg="ignoring event" container=63516ce8427c3652cc408b32127bc46c3288b401fcb2f3d9e9886b1ae5d61eee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.816983452Z" level=info msg="ignoring event" container=92e504425015f6694b5193e4bff39b519a743107dfe95a14ee00c69e7659392e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.887031873Z" level=info msg="ignoring event" container=3e9526bd593d6fa19262a77ffcf3e3e9d0614b9c989c1be95d61d2094bbc89d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.960416254Z" level=info msg="ignoring event" container=9a3eb83394a58bea8d623e5fdd50ad61f681fa2c52f2ac2eb8defbd91e0c958f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.033331515Z" level=info msg="ignoring event" container=39060d60adc24b1d4987133edd9d517608ae65bb59235c4db759c241fe0823fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.099853416Z" level=info msg="ignoring event" container=68d4bd8cb2b7d172bcfd32b932276ca792fbd5929171959d0240f45f614d9eb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.203316804Z" level=info msg="ignoring event" container=dd083af036b897cf9192632b86d9024e5b22472b36eb89c3b9fd96e92a7bc5c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.266198047Z" level=info msg="ignoring event" container=f9761cec1fd49649716fe3be0102bc9455aa518242b895dbf3f0c53079c001a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.387062809Z" level=info msg="ignoring event" container=9d4dd2ec1fae9c25cffc8d9dac79dc346255ec432f0a0cee71a57e269e90e450 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:27 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:27.642540605Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:27 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:27.642583450Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:27 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:27.644792187Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:28 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:28.931409174Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Sep 06 23:04:32 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:32.245174109Z" level=info msg="ignoring event" container=dd948dba4b0a98f82d36fe2ed92ec89b34d46cb38c394adc68af4a561a49d2d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:32 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:32.427411636Z" level=info msg="ignoring event" container=da872e1e0000e6fac34dde89ae3e13635ae0f1746dcfcc8d219d156d94bbb3ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:35 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:35.925164190Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 23:04:36 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:36.898675727Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 23:04:42 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:42.911233810Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:42 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:42.911281852Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:42 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:42.934977986Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:43 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:43.127858961Z" level=info msg="ignoring event" container=3773d6d6cea7a35b5c686a8b34f8501e8af1b635790d0f4ada7c95aa70fa8fac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:43 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:43.435681470Z" level=info msg="ignoring event" container=58253699cbd5ccc905b37fd1b4b3755af528d8a40b8ebb64352034aff9210ffe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
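The repeated image-pull failures in the Docker log above all reduce to one cause: fake.domain does not resolve, which is expected, since the metrics-server test deliberately points its image at a nonexistent registry. The underlying failure is reproducible outside Docker (sketch):

// lookup_sketch.go: reproduce the "no such host" error behind the
// pull failures above. fake.domain is intentionally unresolvable.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed as expected:", err) // *net.DNSError: no such host
		return
	}
	fmt.Println(addrs)
}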
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	58253699cbd5c       a90209bb39e3d                                                                                    38 seconds ago       Exited              dashboard-metrics-scraper   1                   f98c8b1cd8708
	a5ce9d4ee1154       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   48 seconds ago       Running             kubernetes-dashboard        0                   14bd27279d043
	0bf6434f44ba0       6e38f40d628db                                                                                    54 seconds ago       Running             storage-provisioner         0                   11dc8deb9ec05
	245918d1156c9       5185b96f0becf                                                                                    55 seconds ago       Running             coredns                     0                   e8a490cda9647
	488f4bb96fbdc       58a9a0c6d96f2                                                                                    56 seconds ago       Running             kube-proxy                  0                   ea6c7e3349e2c
	360ee6cb836eb       a8a176a5d5d69                                                                                    About a minute ago   Running             etcd                        0                   8c803c85c284b
	2c5c4dd4599e7       1a54c86c03a67                                                                                    About a minute ago   Running             kube-controller-manager     0                   05c8c47bf18f9
	a6ab282f9e2e8       bef2cf3115095                                                                                    About a minute ago   Running             kube-scheduler              0                   a40cdf01a8a30
	2a62b90be79e6       4d2edfd10d3e3                                                                                    About a minute ago   Running             kube-apiserver              0                   8b1c38051a5bb
	
	* 
	* ==> coredns [245918d1156c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220906155821-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220906155821-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=embed-certs-20220906155821-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T16_04_11_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 23:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220906155821-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 23:05:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 23:05:19 +0000   Tue, 06 Sep 2022 23:04:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 23:05:19 +0000   Tue, 06 Sep 2022 23:04:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 23:05:19 +0000   Tue, 06 Sep 2022 23:04:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 23:05:19 +0000   Tue, 06 Sep 2022 23:05:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220906155821-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                1dba8370-9279-4ea7-9dc1-f6d32eb7589f
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-565d847f94-hwccr                                     100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     57s
  kube-system                 etcd-embed-certs-20220906155821-22187                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         69s
  kube-system                 kube-apiserver-embed-certs-20220906155821-22187              250m (4%)     0 (0%)      0 (0%)           0 (0%)         69s
  kube-system                 kube-controller-manager-embed-certs-20220906155821-22187     200m (3%)     0 (0%)      0 (0%)           0 (0%)         70s
  kube-system                 kube-proxy-k97f9                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
  kube-system                 kube-scheduler-embed-certs-20220906155821-22187              100m (1%)     0 (0%)      0 (0%)           0 (0%)         70s
  kube-system                 metrics-server-5c8fd5cf8-xq9zv                               100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         54s
  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
  kubernetes-dashboard        dashboard-metrics-scraper-7b94984548-zz2mf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
  kubernetes-dashboard        kubernetes-dashboard-54596f475f-8dtl6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
  cpu                850m (14%)  0 (0%)
  memory             370Mi (6%)  170Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s (x3 over 76s)  kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x3 over 76s)  kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x2 over 76s)  kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     70s                kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                70s                kubelet          Node embed-certs-20220906155821-22187 status is now: NodeReady
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           58s                node-controller  Node embed-certs-20220906155821-22187 event: Registered Node embed-certs-20220906155821-22187 in Controller
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeNotReady
	  Normal  NodeReady                2s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeReady
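The NodePressure verification in the start log reads exactly the conditions and capacity figures shown in this node description. A client-go sketch that prints the same condition table (node name from this run; error handling trimmed):

// nodeconditions_sketch.go: read the node conditions listed above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-20220906155821-22187", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}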
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [360ee6cb836e] <==
	* {"level":"info","ts":"2022-09-06T23:04:06.306Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T23:04:06.305Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T23:04:06.305Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T23:04:06.950Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220906155821-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T23:04:06.950Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T23:04:06.951Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T23:04:06.951Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T23:04:06.951Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T23:04:06.952Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T23:04:06.952Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T23:04:06.952Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-09-06T23:04:06.999Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T23:04:06.999Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T23:04:06.999Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2022-09-06T23:04:25.631Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"134.315816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-09-06T23:04:25.631Z","caller":"traceutil/trace.go:171","msg":"trace[1816448117] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:359; }","duration":"134.435927ms","start":"2022-09-06T23:04:25.497Z","end":"2022-09-06T23:04:25.631Z","steps":["trace[1816448117] 'range keys from in-memory index tree'  (duration: 134.056276ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-06T23:04:25.631Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.509473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2022-09-06T23:04:25.631Z","caller":"traceutil/trace.go:171","msg":"trace[581965917] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:359; }","duration":"102.869659ms","start":"2022-09-06T23:04:25.528Z","end":"2022-09-06T23:04:25.631Z","steps":["trace[581965917] 'range keys from in-memory index tree'  (duration: 102.423669ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:05:22 up  1:21,  0 users,  load average: 1.27, 1.06, 1.04
	Linux embed-certs-20220906155821-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2a62b90be79e] <==
	* I0906 23:04:09.955205       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 23:04:09.955234       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 23:04:10.213112       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 23:04:10.236038       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 23:04:10.385781       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0906 23:04:10.389292       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0906 23:04:10.389924       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 23:04:10.392591       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 23:04:10.985631       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 23:04:11.742709       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 23:04:11.747703       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0906 23:04:11.753774       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 23:04:11.821476       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 23:04:24.243969       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0906 23:04:24.443378       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0906 23:04:27.114275       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.98.105.71]
	I0906 23:04:27.833430       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.105.113.30]
	I0906 23:04:27.843694       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.102.183.107]
	W0906 23:04:27.944072       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 23:04:27.944112       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0906 23:04:27.944117       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 23:04:27.944155       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 23:04:27.944188       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 23:04:27.945223       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
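The 503s above are expected at this point in the run: the v1beta1.metrics.k8s.io APIService is registered, but its backing metrics-server pod is still Pending because its image points at the unresolvable fake.domain registry. Whether the aggregated API is serving can be checked through discovery (sketch; kubeconfig path is illustrative):

// metricsapi_sketch.go: ask discovery whether metrics.k8s.io/v1beta1 is
// serving; while metrics-server is Pending this returns the same
// "service unavailable" the apiserver log above complains about.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_, err = cs.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	fmt.Println("metrics API serving:", err == nil, err)
}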
	
	* 
	* ==> kube-controller-manager [2c5c4dd4599e] <==
	* I0906 23:04:24.843280       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-7hgsh"
	I0906 23:04:24.846851       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-hwccr"
	I0906 23:04:24.862440       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-7hgsh"
	I0906 23:04:26.949316       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I0906 23:04:27.003188       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-xq9zv"
	I0906 23:04:27.718901       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I0906 23:04:27.724386       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 23:04:27.729164       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 23:04:27.732492       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:04:27.732888       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 23:04:27.738609       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:04:27.738734       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:04:27.740690       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-54596f475f to 1"
	I0906 23:04:27.746643       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 23:04:27.752720       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 23:04:27.758319       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:04:27.758529       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 23:04:27.798381       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 23:04:27.798454       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:04:27.798474       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:04:27.798485       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:04:27.803077       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-54596f475f-8dtl6"
	I0906 23:04:27.841515       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-zz2mf"
	E0906 23:05:18.959379       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0906 23:05:18.967107       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
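The FailedCreate/SuccessfulCreate churn above is an ordinary startup race: the ReplicaSet controller attempts to create the dashboard pods before the kubernetes-dashboard ServiceAccount exists, and retries until it does (SuccessfulCreate lands at 23:04:27.803). Made explicit, the condition the controller is effectively retrying against looks like this (sketch; interval and timeout are illustrative):

// sawait_sketch.go: poll until a ServiceAccount exists, the condition
// the replica sets above were retrying on. Namespace/name from this run.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollImmediate(200*time.Millisecond, 30*time.Second, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("kubernetes-dashboard").Get(context.TODO(), "kubernetes-dashboard", metav1.GetOptions{})
		return err == nil, nil
	})
	fmt.Println("serviceaccount present:", err == nil)
}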
	
	* 
	* ==> kube-proxy [488f4bb96fbd] <==
	* I0906 23:04:25.797060       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 23:04:25.797137       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 23:04:25.797173       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 23:04:25.896727       1 server_others.go:206] "Using iptables Proxier"
	I0906 23:04:25.896810       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 23:04:25.896826       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 23:04:25.896844       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 23:04:25.896882       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 23:04:25.897004       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 23:04:25.897181       1 server.go:661] "Version info" version="v1.25.0"
	I0906 23:04:25.897192       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 23:04:25.901687       1 config.go:317] "Starting service config controller"
	I0906 23:04:25.901719       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 23:04:25.901757       1 config.go:226] "Starting endpoint slice config controller"
	I0906 23:04:25.901762       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 23:04:25.904035       1 config.go:444] "Starting node config controller"
	I0906 23:04:25.907488       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 23:04:26.002846       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 23:04:26.002897       1 shared_informer.go:262] Caches are synced for service config
	I0906 23:04:26.008673       1 shared_informer.go:262] Caches are synced for node config
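The route_localnet=1 line appears twice above, apparently once per proxier in the dual-stack pair; the sysctl allows NodePort traffic addressed to loopback to be routed (see the issues.k8s.io/90259 reference in the log). The resulting kernel setting can be inspected directly (sketch, run on the node):

// routelocalnet_sketch.go: read the sysctl kube-proxy sets above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
	if err != nil {
		panic(err)
	}
	fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
}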
	
	* 
	* ==> kube-scheduler [a6ab282f9e2e] <==
	* W0906 23:04:09.001539       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:09.001655       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:09.001687       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 23:04:09.001698       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 23:04:09.001868       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 23:04:09.001699       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 23:04:09.001982       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 23:04:09.002004       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 23:04:09.001747       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:09.002060       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:09.002022       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 23:04:09.002075       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 23:04:09.002326       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:09.002387       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:09.912865       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 23:04:09.912920       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 23:04:09.948704       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:09.948758       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:09.959930       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 23:04:09.959969       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0906 23:04:10.057029       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:10.057070       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:10.092021       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 23:04:10.092061       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0906 23:04:10.294004       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:59:31 UTC, end at Tue 2022-09-06 23:05:22 UTC. --
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.423873   10973 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.423907   10973 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.423938   10973 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.423963   10973 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.423991   10973 topology_manager.go:205] "Topology Admit Handler"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462199   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36966060-5270-424c-a005-81413d70656a-xtables-lock\") pod \"kube-proxy-k97f9\" (UID: \"36966060-5270-424c-a005-81413d70656a\") " pod="kube-system/kube-proxy-k97f9"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462269   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74rxz\" (UniqueName: \"kubernetes.io/projected/36966060-5270-424c-a005-81413d70656a-kube-api-access-74rxz\") pod \"kube-proxy-k97f9\" (UID: \"36966060-5270-424c-a005-81413d70656a\") " pod="kube-system/kube-proxy-k97f9"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462297   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q66zp\" (UniqueName: \"kubernetes.io/projected/076a5aac-3ba3-4dce-aa96-bcf6faa2dc24-kube-api-access-q66zp\") pod \"dashboard-metrics-scraper-7b94984548-zz2mf\" (UID: \"076a5aac-3ba3-4dce-aa96-bcf6faa2dc24\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-zz2mf"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462317   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36966060-5270-424c-a005-81413d70656a-kube-proxy\") pod \"kube-proxy-k97f9\" (UID: \"36966060-5270-424c-a005-81413d70656a\") " pod="kube-system/kube-proxy-k97f9"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462341   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b44daf06-cea8-4179-b626-1a1e13fc9778-tmp-volume\") pod \"kubernetes-dashboard-54596f475f-8dtl6\" (UID: \"b44daf06-cea8-4179-b626-1a1e13fc9778\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-8dtl6"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462360   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/076a5aac-3ba3-4dce-aa96-bcf6faa2dc24-tmp-volume\") pod \"dashboard-metrics-scraper-7b94984548-zz2mf\" (UID: \"076a5aac-3ba3-4dce-aa96-bcf6faa2dc24\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-zz2mf"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462378   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd-tmp\") pod \"storage-provisioner\" (UID: \"1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd\") " pod="kube-system/storage-provisioner"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462400   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tds2w\" (UniqueName: \"kubernetes.io/projected/1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd-kube-api-access-tds2w\") pod \"storage-provisioner\" (UID: \"1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd\") " pod="kube-system/storage-provisioner"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462573   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/73f275fe-7d42-400b-ad93-df387c9ed53d-tmp-dir\") pod \"metrics-server-5c8fd5cf8-xq9zv\" (UID: \"73f275fe-7d42-400b-ad93-df387c9ed53d\") " pod="kube-system/metrics-server-5c8fd5cf8-xq9zv"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462677   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gctwq\" (UniqueName: \"kubernetes.io/projected/b44daf06-cea8-4179-b626-1a1e13fc9778-kube-api-access-gctwq\") pod \"kubernetes-dashboard-54596f475f-8dtl6\" (UID: \"b44daf06-cea8-4179-b626-1a1e13fc9778\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-8dtl6"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462714   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14797c46-59df-423f-9376-8faa955f2426-config-volume\") pod \"coredns-565d847f94-hwccr\" (UID: \"14797c46-59df-423f-9376-8faa955f2426\") " pod="kube-system/coredns-565d847f94-hwccr"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462765   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36966060-5270-424c-a005-81413d70656a-lib-modules\") pod \"kube-proxy-k97f9\" (UID: \"36966060-5270-424c-a005-81413d70656a\") " pod="kube-system/kube-proxy-k97f9"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462824   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv49b\" (UniqueName: \"kubernetes.io/projected/14797c46-59df-423f-9376-8faa955f2426-kube-api-access-cv49b\") pod \"coredns-565d847f94-hwccr\" (UID: \"14797c46-59df-423f-9376-8faa955f2426\") " pod="kube-system/coredns-565d847f94-hwccr"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462855   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4m67\" (UniqueName: \"kubernetes.io/projected/73f275fe-7d42-400b-ad93-df387c9ed53d-kube-api-access-d4m67\") pod \"metrics-server-5c8fd5cf8-xq9zv\" (UID: \"73f275fe-7d42-400b-ad93-df387c9ed53d\") " pod="kube-system/metrics-server-5c8fd5cf8-xq9zv"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462881   10973 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 23:05:21 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:21.621794   10973 request.go:601] Waited for 1.10350225s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 06 23:05:21 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:21.670689   10973 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220906155821-22187\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220906155821-22187"
	Sep 06 23:05:21 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:21.836740   10973 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220906155821-22187\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220906155821-22187"
	Sep 06 23:05:22 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:22.092013   10973 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220906155821-22187\" already exists" pod="kube-system/etcd-embed-certs-20220906155821-22187"
	Sep 06 23:05:22 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:22.249561   10973 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220906155821-22187\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220906155821-22187"
	
	* 
	* ==> kubernetes-dashboard [a5ce9d4ee115] <==
	* 2022/09/06 23:04:33 Using namespace: kubernetes-dashboard
	2022/09/06 23:04:33 Using in-cluster config to connect to apiserver
	2022/09/06 23:04:33 Using secret token for csrf signing
	2022/09/06 23:04:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/09/06 23:04:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/09/06 23:04:33 Successful initial request to the apiserver, version: v1.25.0
	2022/09/06 23:04:33 Generating JWE encryption key
	2022/09/06 23:04:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/09/06 23:04:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/09/06 23:04:34 Initializing JWE encryption key from synchronized object
	2022/09/06 23:04:34 Creating in-cluster Sidecar client
	2022/09/06 23:04:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 23:04:34 Serving insecurely on HTTP port: 9090
	2022/09/06 23:05:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 23:04:33 Starting overwatch
	
	* 
	* ==> storage-provisioner [0bf6434f44ba] <==
	* I0906 23:04:28.071934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 23:04:28.079661       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 23:04:28.079724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 23:04:28.084421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 23:04:28.084535       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220906155821-22187_eb8b2fed-6779-444b-8beb-ed01bacb4e81!
	I0906 23:04:28.084514       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18b8a35b-b458-4ed7-8e53-7663543ebb78", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220906155821-22187_eb8b2fed-6779-444b-8beb-ed01bacb4e81 became leader
	I0906 23:04:28.184842       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220906155821-22187_eb8b2fed-6779-444b-8beb-ed01bacb4e81!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220906155821-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c8fd5cf8-xq9zv
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220906155821-22187 describe pod metrics-server-5c8fd5cf8-xq9zv
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220906155821-22187 describe pod metrics-server-5c8fd5cf8-xq9zv: exit status 1 (56.76703ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-xq9zv" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220906155821-22187 describe pod metrics-server-5c8fd5cf8-xq9zv: exit status 1
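For reference, the non-running-pod sweep that led to the NotFound above can be rerun by hand against the same profile; a minimal sketch, assuming the embed-certs-20220906155821-22187 context is still present in the kubeconfig (the metrics-server pod had most likely already been deleted and replaced between the list and the describe, which is why the describe returns NotFound):

    # list pods in every namespace whose phase is not Running
    kubectl --context embed-certs-20220906155821-22187 get po -A \
        --field-selector='status.phase!=Running' \
        -o=jsonpath='{.items[*].metadata.name}'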
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220906155821-22187
helpers_test.go:235: (dbg) docker inspect embed-certs-20220906155821-22187:

-- stdout --
	[
	    {
	        "Id": "3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099",
	        "Created": "2022-09-06T22:58:27.553768906Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:59:31.549970745Z",
	            "FinishedAt": "2022-09-06T22:59:29.52348373Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099/hosts",
	        "LogPath": "/var/lib/docker/containers/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099/3ef72ce85134a889db1250625f3bd3ed2266e7a7217a471da940a0691008d099-json.log",
	        "Name": "/embed-certs-20220906155821-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220906155821-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220906155821-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/33b9b39b8de3dd3cf661a150ecae4b4103a3bbbc06c24db5ede6ea05bccd5c24-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33b9b39b8de3dd3cf661a150ecae4b4103a3bbbc06c24db5ede6ea05bccd5c24/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33b9b39b8de3dd3cf661a150ecae4b4103a3bbbc06c24db5ede6ea05bccd5c24/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33b9b39b8de3dd3cf661a150ecae4b4103a3bbbc06c24db5ede6ea05bccd5c24/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220906155821-22187",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220906155821-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220906155821-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220906155821-22187",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220906155821-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7e06add961644867baf0052ed3e0dbee57095f66a6a4d08976107dc7d0f32d6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60235"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60236"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60237"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60238"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60239"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e7e06add9616",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220906155821-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ef72ce85134",
	                        "embed-certs-20220906155821-22187"
	                    ],
	                    "NetworkID": "b1884146802eeb80d7a8e8de1d1caceb01aac205af1415343b6042b89d618623",
	                    "EndpointID": "c20ab62dc7978005c256ab050c7a7d95a240d3e44f89ff97038a1032b15a4ec5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
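For reference, the few fields the pause post-mortem actually consults in the inspect dump above can be pulled out directly with docker's Go-template support instead of scanning the full JSON; a minimal sketch using the container name from the log:

    # pause freezes the container's processes without stopping it,
    # so Status stays "running" while Paused flips to true
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-20220906155821-22187

    # host port published for the API server (8443/tcp); same template
    # shape minikube itself uses for the SSH port further below
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-20220906155821-22187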
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220906155821-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220906155821-22187 logs -n 25: (2.798435645s)
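If these post-mortem logs need to be archived outside the harness, the same command also accepts a --file flag; a minimal sketch (the output path is illustrative):

    out/minikube-darwin-amd64 -p embed-certs-20220906155821-22187 logs -n 25 \
        --file=./embed-certs-pause-postmortem.log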
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:50 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220906155820-22187      | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | disable-driver-mounts-20220906155820-22187                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:04 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:04 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:05 PDT | 06 Sep 22 16:05 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:59:30
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:59:30.262038   38636 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:59:30.262188   38636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:59:30.262193   38636 out.go:309] Setting ErrFile to fd 2...
	I0906 15:59:30.262197   38636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:59:30.262308   38636 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:59:30.262744   38636 out.go:303] Setting JSON to false
	I0906 15:59:30.277675   38636 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10741,"bootTime":1662494429,"procs":336,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:59:30.277782   38636 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:59:30.299234   38636 out.go:177] * [embed-certs-20220906155821-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:59:30.341461   38636 notify.go:193] Checking for updates...
	I0906 15:59:30.363080   38636 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:59:30.384168   38636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:59:30.405458   38636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:59:30.426996   38636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:59:30.448360   38636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:59:30.470635   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:59:30.471106   38636 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:59:30.539352   38636 docker.go:137] docker version: linux-20.10.17
	I0906 15:59:30.539462   38636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:59:30.670843   38636 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:59:30.614641007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:59:30.712577   38636 out.go:177] * Using the docker driver based on existing profile
	I0906 15:59:30.734837   38636 start.go:284] selected driver: docker
	I0906 15:59:30.734870   38636 start.go:808] validating driver "docker" against &{Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:30.735025   38636 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:59:30.738354   38636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:59:30.869658   38636 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:59:30.81424686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:59:30.869799   38636 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:59:30.869818   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:30.869829   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:30.869843   38636 start_flags.go:310] config:
	{Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cl
uster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:30.912149   38636 out.go:177] * Starting control plane node embed-certs-20220906155821-22187 in cluster embed-certs-20220906155821-22187
	I0906 15:59:30.933415   38636 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:59:30.954429   38636 out.go:177] * Pulling base image ...
	I0906 15:59:31.001627   38636 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:59:31.001689   38636 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:59:31.001724   38636 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:59:31.001744   38636 cache.go:57] Caching tarball of preloaded images
	I0906 15:59:31.001934   38636 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:59:31.001957   38636 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:59:31.002893   38636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/config.json ...
	I0906 15:59:31.066643   38636 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:59:31.066664   38636 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:59:31.066675   38636 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:59:31.066736   38636 start.go:364] acquiring machines lock for embed-certs-20220906155821-22187: {Name:mkf641e2928acfedb898f07b24fd168dccdc0551 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:59:31.066861   38636 start.go:368] acquired machines lock for "embed-certs-20220906155821-22187" in 104.801µs
	I0906 15:59:31.066880   38636 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:59:31.066891   38636 fix.go:55] fixHost starting: 
	I0906 15:59:31.067105   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 15:59:31.130023   38636 fix.go:103] recreateIfNeeded on embed-certs-20220906155821-22187: state=Stopped err=<nil>
	W0906 15:59:31.130050   38636 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:59:31.173435   38636 out.go:177] * Restarting existing docker container for "embed-certs-20220906155821-22187" ...
	I0906 15:59:31.194813   38636 cli_runner.go:164] Run: docker start embed-certs-20220906155821-22187
	I0906 15:59:31.539043   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 15:59:31.604033   38636 kic.go:415] container "embed-certs-20220906155821-22187" state is running.
	I0906 15:59:31.604697   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:31.675958   38636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/config.json ...
	I0906 15:59:31.676353   38636 machine.go:88] provisioning docker machine ...
	I0906 15:59:31.676379   38636 ubuntu.go:169] provisioning hostname "embed-certs-20220906155821-22187"
	I0906 15:59:31.676439   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:31.744270   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:31.744484   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:31.744500   38636 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220906155821-22187 && echo "embed-certs-20220906155821-22187" | sudo tee /etc/hostname
	I0906 15:59:31.866514   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220906155821-22187
	
	I0906 15:59:31.866600   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:31.931384   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:31.931532   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:31.931548   38636 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220906155821-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220906155821-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220906155821-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:59:32.043786   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: 
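The SSH command above makes the node's /etc/hosts self-resolving in an idempotent way: if no line already maps the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A rough Go sketch of the same check-then-rewrite logic (illustrative only; minikube actually runs the shell snippet over SSH, and the file path and hostname below are simply the ones from the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostname mirrors the shell above: no-op if the name already resolves,
// otherwise rewrite an existing 127.0.1.1 line or append a fresh one.
func ensureHostname(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	// Equivalent of the `grep -xq '.*\s<name>'` check.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil
	}
	line := "127.0.1.1 " + name
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out []byte
	if re.Match(data) {
		out = re.ReplaceAll(data, []byte(line)) // the sed branch
	} else {
		out = append(data, []byte(line+"\n")...) // the tee -a branch
	}
	return os.WriteFile(hostsPath, out, 0o644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "embed-certs-20220906155821-22187"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}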
	I0906 15:59:32.043809   38636 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:59:32.043831   38636 ubuntu.go:177] setting up certificates
	I0906 15:59:32.043843   38636 provision.go:83] configureAuth start
	I0906 15:59:32.043910   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:32.109953   38636 provision.go:138] copyHostCerts
	I0906 15:59:32.110077   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:59:32.110087   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:59:32.110175   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:59:32.110375   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:59:32.110389   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:59:32.110445   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:59:32.110625   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:59:32.110632   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:59:32.110688   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:59:32.110800   38636 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220906155821-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220906155821-22187]
	I0906 15:59:32.234910   38636 provision.go:172] copyRemoteCerts
	I0906 15:59:32.234973   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:59:32.235024   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.301797   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:32.384511   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:59:32.404630   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0906 15:59:32.423185   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:59:32.442534   38636 provision.go:86] duration metric: configureAuth took 398.671593ms
	I0906 15:59:32.442548   38636 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:59:32.442701   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:59:32.442763   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.508255   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.508405   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.508426   38636 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:59:32.623407   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:59:32.623421   38636 ubuntu.go:71] root file system type: overlay
	I0906 15:59:32.623580   38636 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:59:32.623645   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.688184   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.688365   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.688423   38636 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:59:32.811885   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:59:32.811975   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.875508   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.875661   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.875674   38636 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 15:59:32.994163   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:59:32.994185   38636 machine.go:91] provisioned docker machine in 1.317820355s
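The docker.service unit written above carries two ExecStart= lines on purpose: systemd treats an empty ExecStart= as "clear any previously configured command", so the pair follows the standard override convention that the unit's own comments describe. A minimal sketch of rendering such a unit from a template (not minikube's actual code; the flag list is a small subset of the one shown in the log):

package main

import (
	"os"
	"text/template"
)

// A pared-down version of the unit above; only the ExecStart handling is shown.
const unit = `[Service]
Type=notify
Restart=on-failure
# An empty ExecStart= clears any previously defined command; without it,
# systemd would see two ExecStart= lines and refuse to start the service.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock{{range .Flags}} {{.}}{{end}}
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	// Flags here are a small illustrative subset of the ones in the log above.
	_ = t.Execute(os.Stdout, struct{ Flags []string }{
		Flags: []string{"--tlsverify", "--label", "provider=docker"},
	})
}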
	I0906 15:59:32.994196   38636 start.go:300] post-start starting for "embed-certs-20220906155821-22187" (driver="docker")
	I0906 15:59:32.994202   38636 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:59:32.994271   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:59:32.994324   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.059474   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.140744   38636 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:59:33.144225   38636 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:59:33.144240   38636 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:59:33.144246   38636 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:59:33.144251   38636 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:59:33.144259   38636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:59:33.144377   38636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:59:33.144520   38636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:59:33.144661   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:59:33.151919   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:59:33.171420   38636 start.go:303] post-start completed in 177.213688ms
	I0906 15:59:33.171494   38636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:59:33.171543   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.236286   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.315015   38636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:59:33.319490   38636 fix.go:57] fixHost completed within 2.252593148s
	I0906 15:59:33.319503   38636 start.go:83] releasing machines lock for "embed-certs-20220906155821-22187", held for 2.252628285s
	I0906 15:59:33.319576   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:33.383050   38636 ssh_runner.go:195] Run: systemctl --version
	I0906 15:59:33.383109   38636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:59:33.383135   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.383168   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.450261   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.450290   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.581030   38636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:59:33.590993   38636 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:59:33.591044   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:59:33.602299   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:59:33.615635   38636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:59:33.686986   38636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:59:33.757095   38636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:59:33.825045   38636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:59:34.060910   38636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:59:34.126849   38636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:59:34.192180   38636 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:59:34.202955   38636 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:59:34.203017   38636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:59:34.206437   38636 start.go:471] Will wait 60s for crictl version
	I0906 15:59:34.206478   38636 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:59:34.302591   38636 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:59:34.302665   38636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:59:34.337107   38636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:59:34.413758   38636 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:59:34.413920   38636 cli_runner.go:164] Run: docker exec -t embed-certs-20220906155821-22187 dig +short host.docker.internal
	I0906 15:59:34.525925   38636 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:59:34.526040   38636 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:59:34.530030   38636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:59:34.539714   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:34.603049   38636 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:59:34.603134   38636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:59:34.633537   38636 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:59:34.633555   38636 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:59:34.633621   38636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:59:34.664984   38636 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:59:34.665007   38636 cache_images.go:84] Images are preloaded, skipping loading
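"Images are preloaded, skipping loading" falls out of a set comparison: the images reported by `docker images` are checked against the list this Kubernetes version needs, and extraction of the preload tarball is skipped only when nothing is missing. A sketch of that decision, using a trimmed-down image list taken from the stdout block above:

package main

import "fmt"

func main() {
	// What `docker images` reported (trimmed; the full list is in the stdout block above).
	have := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.25.0":     true,
		"registry.k8s.io/etcd:3.5.4-0":               true,
		"registry.k8s.io/coredns/coredns:v1.9.3":     true,
		"gcr.io/k8s-minikube/storage-provisioner:v5": true,
	}
	// What v1.25.0 needs (same trimming).
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.25.0",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	var missing []string
	for _, img := range want {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	if len(missing) == 0 {
		fmt.Println("Images already preloaded, skipping extraction")
	} else {
		fmt.Println("would extract the preload tarball; missing:", missing)
	}
}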
	I0906 15:59:34.665091   38636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:59:34.744509   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:34.744522   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:34.744536   38636 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:59:34.744551   38636 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220906155821-22187 NodeName:embed-certs-20220906155821-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:59:34.744685   38636 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220906155821-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 15:59:34.744775   38636 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220906155821-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
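The generated kubeadm config wires three address ranges together: the pod subnet (10.244.0.0/16), the service subnet (10.96.0.0/12), and the node IP (192.168.76.2), which must live outside both. A small illustrative check of that invariant, using only values from the config above:

package main

import (
	"fmt"
	"net"
)

func main() {
	nodeIP := net.ParseIP("192.168.76.2") // the AdvertiseAddress / node-ip above
	for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} { // pod and service subnets
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			panic(err)
		}
		if subnet.Contains(nodeIP) {
			fmt.Printf("node IP %s collides with %s\n", nodeIP, cidr)
		} else {
			fmt.Printf("node IP %s is outside %s (ok)\n", nodeIP, cidr)
		}
	}
}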
	I0906 15:59:34.744831   38636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:59:34.752036   38636 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:59:34.752086   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:59:34.758799   38636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0906 15:59:34.770909   38636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:59:34.782836   38636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0906 15:59:34.795526   38636 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:59:34.799185   38636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:59:34.808319   38636 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187 for IP: 192.168.76.2
	I0906 15:59:34.808436   38636 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:59:34.808488   38636 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:59:34.808571   38636 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/client.key
	I0906 15:59:34.808633   38636 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.key.31bdca25
	I0906 15:59:34.808689   38636 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.key
	I0906 15:59:34.808881   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:59:34.808918   38636 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:59:34.808930   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:59:34.808969   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:59:34.809000   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:59:34.809031   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:59:34.809090   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:59:34.809639   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:59:34.826558   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:59:34.842729   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:59:34.859199   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:59:34.875553   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:59:34.892683   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:59:34.909267   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:59:34.925586   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:59:34.943279   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:59:34.960570   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:59:34.976829   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:59:34.993916   38636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:59:35.006394   38636 ssh_runner.go:195] Run: openssl version
	I0906 15:59:35.011296   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:59:35.019183   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.023061   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.023103   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.028251   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:59:35.035345   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:59:35.042841   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.046567   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.046608   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.051690   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:59:35.060553   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:59:35.068394   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.072508   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.072548   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.078010   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
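The `ln -fs ... /etc/ssl/certs/b5213941.0` commands above follow OpenSSL's CA lookup convention: on Linux, a certificate in /etc/ssl/certs is located via a symlink named after a hash of its subject, which `openssl x509 -hash -noout` prints. A sketch reproducing the same hash-then-link sequence (illustrative only; paths are the ones from the log, and it shells out to openssl just as the test does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink OpenSSL uses to find a CA.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}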
	I0906 15:59:35.085338   38636 kubeadm.go:396] StartCluster: {Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:35.085441   38636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:59:35.114198   38636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:59:35.121678   38636 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:59:35.121695   38636 kubeadm.go:627] restartCluster start
	I0906 15:59:35.121742   38636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:59:35.129021   38636 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.129082   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:35.193199   38636 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220906155821-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:59:35.193376   38636 kubeconfig.go:127] "embed-certs-20220906155821-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:59:35.193711   38636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:59:35.195111   38636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:59:35.203811   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.203867   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.212091   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.413063   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.413147   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.423469   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.613039   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.613124   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.622019   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.812186   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.812267   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.821025   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.013432   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.013565   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.023339   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.212268   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.212352   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.220885   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.412199   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.412282   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.421519   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.612305   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.612379   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.621617   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.812269   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.812442   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.821913   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.012008   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.012110   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.021439   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.212257   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.212414   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.221560   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.412154   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.412213   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.421151   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.611593   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.611679   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.620601   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:37.813302   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:37.813472   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:37.822723   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.013156   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:38.013257   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:38.023237   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.212440   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:38.212572   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:38.221850   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.221859   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:38.221904   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:38.229570   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.229582   38636 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0906 15:59:38.229589   38636 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:59:38.229646   38636 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:59:38.258980   38636 docker.go:443] Stopping containers: [3ace43e3cdd0 fa4259ac8ae1 b10f76b0afab a46bff16a884 ac542e62f7da b9ad41cd6945 a33bf934daea 4f7a134f0b21 dfdc5f92562f d4f62ccab8af 48e63018d570 b925f58f7247 8753c7e8e889 cd1efc2e1d99 94326a96dd97 b67711366c6d]
	I0906 15:59:38.259054   38636 ssh_runner.go:195] Run: docker stop 3ace43e3cdd0 fa4259ac8ae1 b10f76b0afab a46bff16a884 ac542e62f7da b9ad41cd6945 a33bf934daea 4f7a134f0b21 dfdc5f92562f d4f62ccab8af 48e63018d570 b925f58f7247 8753c7e8e889 cd1efc2e1d99 94326a96dd97 b67711366c6d
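Before reconfiguring, minikube stops every kube-system container in one pass: it lists IDs with a docker name filter matching kubelet-created containers (`k8s_<container>_<pod>_(kube-system)_...`) and hands them all to a single `docker stop`. A sketch of the same two-step sequence via os/exec (illustrative; minikube runs these commands over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same name filter as in the log: containers kubelet created for kube-system pods.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// One `docker stop` with every ID, mirroring the single command above.
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		fmt.Println(err)
	}
}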
	I0906 15:59:38.288935   38636 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:59:38.298782   38636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:59:38.306417   38636 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 22:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Sep  6 22:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep  6 22:58 /etc/kubernetes/scheduler.conf
	
	I0906 15:59:38.306467   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:59:38.313578   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:59:38.320753   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:59:38.327712   38636 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.327753   38636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:59:38.334398   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:59:38.341325   38636 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.341375   38636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 15:59:38.349241   38636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:59:38.356713   38636 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:59:38.356727   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:38.408089   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.277607   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.401052   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.451457   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
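Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init, each against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that phase sequence (illustrative; error handling simplified, binary and config paths taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.25.0/kubeadm"
	// The five phases in the order the log runs them.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{}, p...), "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all phases completed")
}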
	I0906 15:59:39.539398   38636 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:59:39.539455   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.047870   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.548175   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.608683   38636 api_server.go:71] duration metric: took 1.069984323s to wait for apiserver process to appear ...
	I0906 15:59:40.608708   38636 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:59:40.608729   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:40.609867   38636 api_server.go:256] stopped: https://127.0.0.1:60239/healthz: Get "https://127.0.0.1:60239/healthz": EOF
	I0906 15:59:41.110592   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:43.701073   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0906 15:59:43.701130   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0906 15:59:44.108296   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:44.115415   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:59:44.115431   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 500:
	[... same 25-line healthz body as above, repeated verbatim at the W (warning) level; elided ...]
	I0906 15:59:44.608093   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:44.613832   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:59:44.613847   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 500:
	[... same healthz body as above, repeated verbatim at the W level; elided ...]
	I0906 15:59:45.107569   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:45.113794   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 200:
	ok
	I0906 15:59:45.120558   38636 api_server.go:140] control plane version: v1.25.0
	I0906 15:59:45.120569   38636 api_server.go:130] duration metric: took 4.51431829s to wait for apiserver health ...
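
The healthz probes above trace the apiserver's usual restart sequence: connection EOF while the process is still binding, 403 while the RBAC bootstrap roles are missing, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing, and finally 200. A minimal Go sketch of such a wait loop, using only the standard library and an illustrative port; this is not minikube's actual api_server.go code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout elapses.
// EOF, 403 (RBAC roles not bootstrapped yet) and 500 (post-start hooks
// still failing) are all treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed certificate on localhost,
		// so an anonymous probe has to skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls at ~500ms
	}
	return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
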
	I0906 15:59:45.120576   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:45.120585   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:45.120601   38636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:59:45.128405   38636 system_pods.go:59] 8 kube-system pods found
	I0906 15:59:45.128423   38636 system_pods.go:61] "coredns-565d847f94-5frt9" [0228f046-b179-4812-a7e5-c83cecc89e27] Running
	I0906 15:59:45.128429   38636 system_pods.go:61] "etcd-embed-certs-20220906155821-22187" [c2de4fd6-a0ae-4f47-85de-74bcc70bdb2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:59:45.128433   38636 system_pods.go:61] "kube-apiserver-embed-certs-20220906155821-22187" [0d53a9a2-f2dc-45fa-bce1-519c55da2cc4] Running
	I0906 15:59:45.128438   38636 system_pods.go:61] "kube-controller-manager-embed-certs-20220906155821-22187" [7cbb7baa-b9f1-4603-a7b9-8048df17b8dd] Running
	I0906 15:59:45.128443   38636 system_pods.go:61] "kube-proxy-zss4k" [f1dfb3a5-6fa4-48cf-95fa-0132b1ec5c8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 15:59:45.128448   38636 system_pods.go:61] "kube-scheduler-embed-certs-20220906155821-22187" [f8ba94d8-2b42-4733-b705-bc6af0b91d1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:59:45.128453   38636 system_pods.go:61] "metrics-server-5c8fd5cf8-cdg6d" [65746fe5-91aa-47c8-a8b4-d4a67f749ab8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:59:45.128456   38636 system_pods.go:61] "storage-provisioner" [13ae32f7-198b-4787-8687-aa39b2729274] Running
	I0906 15:59:45.128460   38636 system_pods.go:74] duration metric: took 7.85832ms to wait for pod list to return data ...
	I0906 15:59:45.128467   38636 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:59:45.131418   38636 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:59:45.131433   38636 node_conditions.go:123] node cpu capacity is 6
	I0906 15:59:45.131442   38636 node_conditions.go:105] duration metric: took 2.974231ms to run NodePressure ...
	I0906 15:59:45.131454   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:45.310869   38636 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:59:45.315021   38636 kubeadm.go:778] kubelet initialised
	I0906 15:59:45.315032   38636 kubeadm.go:779] duration metric: took 4.153612ms waiting for restarted kubelet to initialise ...
	I0906 15:59:45.315041   38636 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:59:45.320463   38636 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-5frt9" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:45.326126   38636 pod_ready.go:92] pod "coredns-565d847f94-5frt9" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:45.326135   38636 pod_ready.go:81] duration metric: took 5.66283ms waiting for pod "coredns-565d847f94-5frt9" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:45.326141   38636 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:47.335090   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:49.334484   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:51.337017   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:52.335838   38636 pod_ready.go:92] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:52.335849   38636 pod_ready.go:81] duration metric: took 7.012332045s waiting for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.335855   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.339996   38636 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:52.340004   38636 pod_ready.go:81] duration metric: took 4.146291ms waiting for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.340010   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:54.351029   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:56.848497   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:58.850674   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:59.347750   38636 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.347764   38636 pod_ready.go:81] duration metric: took 7.009427345s waiting for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.347771   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zss4k" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.351913   38636 pod_ready.go:92] pod "kube-proxy-zss4k" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.351921   38636 pod_ready.go:81] duration metric: took 4.135355ms waiting for pod "kube-proxy-zss4k" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.351927   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.356071   38636 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.356080   38636 pod_ready.go:81] duration metric: took 4.1483ms waiting for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.356087   38636 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" ...
	I0906 16:00:01.365786   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	[... pod_ready.go:102 poll repeated every ~2.5s from 16:00:03 through 16:03:58; pod "metrics-server-5c8fd5cf8-cdg6d" stayed "Ready":"False" throughout ...]
	I0906 16:03:59.356938   38636 pod_ready.go:81] duration metric: took 4m0.004474184s waiting for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" ...
	E0906 16:03:59.356974   38636 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 16:03:59.356999   38636 pod_ready.go:38] duration metric: took 4m14.04989418s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:03:59.357025   38636 kubeadm.go:631] restartCluster took 4m24.248696346s
	W0906 16:03:59.357127   38636 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
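
The four-minute metrics-server wait that just timed out is a straight Ready-condition poll. A rough external equivalent, sketched with kubectl's JSONPath output rather than minikube's internal pod_ready.go client; the pod name and ~2.5s cadence come from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(namespace, name string) bool {
	out, err := exec.Command("kubectl", "get", "pod", name, "-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if podReady("kube-system", "metrics-server-5c8fd5cf8-cdg6d") {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
	// Mirrors the pod_ready.go:66 timeout above: give up rather than retry.
	fmt.Println("timed out waiting for pod to be Ready")
}
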
	I0906 16:03:59.357149   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 16:04:03.698932   38636 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.341781129s)
	I0906 16:04:03.698999   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:03.708822   38636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 16:04:03.716300   38636 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 16:04:03.716346   38636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 16:04:03.724386   38636 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
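
The config check above decides whether stale kubeconfigs need cleaning by listing the four expected files and treating a non-zero ls exit as "nothing to clean". A sketch of that exit-status branch, with the paths taken from the log and the surrounding scaffolding purely illustrative (not kubeadm.go):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("ls", "-la",
		"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// GNU ls exits 2 when it cannot access a listed file, matching the
		// "Process exited with status 2" above: no stale configs to clean
		// up, so proceed straight to kubeadm init.
		fmt.Println("config check failed, skipping stale config cleanup; ls exit:", exitErr.ExitCode())
		return
	}
	fmt.Println("existing kubeconfigs found; stale config cleanup would run here")
}
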
	I0906 16:04:03.724421   38636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 16:04:03.767530   38636 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
	I0906 16:04:03.767567   38636 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 16:04:03.863194   38636 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 16:04:03.863313   38636 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 16:04:03.863392   38636 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 16:04:03.985091   38636 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 16:04:04.009873   38636 out.go:204]   - Generating certificates and keys ...
	I0906 16:04:04.009938   38636 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 16:04:04.010013   38636 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 16:04:04.010092   38636 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 16:04:04.010151   38636 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 16:04:04.010224   38636 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 16:04:04.010326   38636 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 16:04:04.010382   38636 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 16:04:04.010428   38636 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 16:04:04.010506   38636 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 16:04:04.010568   38636 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 16:04:04.010599   38636 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 16:04:04.010644   38636 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 16:04:04.112141   38636 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 16:04:04.428252   38636 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 16:04:04.781321   38636 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 16:04:04.891466   38636 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 16:04:04.902953   38636 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 16:04:04.903733   38636 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 16:04:04.903840   38636 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 16:04:04.989147   38636 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 16:04:05.010782   38636 out.go:204]   - Booting up control plane ...
	I0906 16:04:05.010866   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 16:04:05.010943   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 16:04:05.011017   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 16:04:05.011077   38636 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 16:04:05.011220   38636 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 16:04:10.494832   38636 kubeadm.go:317] [apiclient] All control plane components are healthy after 5.503264 seconds
	I0906 16:04:10.494909   38636 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 16:04:10.501767   38636 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 16:04:11.013788   38636 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 16:04:11.013935   38636 kubeadm.go:317] [mark-control-plane] Marking the node embed-certs-20220906155821-22187 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 16:04:11.519763   38636 kubeadm.go:317] [bootstrap-token] Using token: fqw8zb.b3unh498onihp969
	I0906 16:04:11.556084   38636 out.go:204]   - Configuring RBAC rules ...
	I0906 16:04:11.556186   38636 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 16:04:11.556258   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 16:04:11.595414   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 16:04:11.597593   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 16:04:11.600071   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 16:04:11.602066   38636 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 16:04:11.608914   38636 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 16:04:11.744220   38636 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0906 16:04:11.927532   38636 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0906 16:04:11.936157   38636 kubeadm.go:317] 
	I0906 16:04:11.936239   38636 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0906 16:04:11.936251   38636 kubeadm.go:317] 
	I0906 16:04:11.936347   38636 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0906 16:04:11.936360   38636 kubeadm.go:317] 
	I0906 16:04:11.936397   38636 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0906 16:04:11.936483   38636 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 16:04:11.936535   38636 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 16:04:11.936545   38636 kubeadm.go:317] 
	I0906 16:04:11.936592   38636 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0906 16:04:11.936601   38636 kubeadm.go:317] 
	I0906 16:04:11.936648   38636 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 16:04:11.936660   38636 kubeadm.go:317] 
	I0906 16:04:11.936721   38636 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0906 16:04:11.936790   38636 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 16:04:11.936860   38636 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 16:04:11.936870   38636 kubeadm.go:317] 
	I0906 16:04:11.936973   38636 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 16:04:11.937041   38636 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0906 16:04:11.937049   38636 kubeadm.go:317] 
	I0906 16:04:11.937130   38636 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token fqw8zb.b3unh498onihp969 \
	I0906 16:04:11.937205   38636 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
	I0906 16:04:11.937225   38636 kubeadm.go:317] 	--control-plane 
	I0906 16:04:11.937230   38636 kubeadm.go:317] 
	I0906 16:04:11.937297   38636 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0906 16:04:11.937303   38636 kubeadm.go:317] 
	I0906 16:04:11.937368   38636 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token fqw8zb.b3unh498onihp969 \
	I0906 16:04:11.937490   38636 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
	I0906 16:04:11.940643   38636 kubeadm.go:317] W0906 23:04:03.783659    7834 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 16:04:11.940759   38636 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 16:04:11.940841   38636 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 16:04:11.940910   38636 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 16:04:11.940926   38636 cni.go:95] Creating CNI manager for ""
	I0906 16:04:11.940937   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:04:11.940954   38636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 16:04:11.941016   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:11.941027   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4 minikube.k8s.io/name=embed-certs-20220906155821-22187 minikube.k8s.io/updated_at=2022_09_06T16_04_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:12.053740   38636 ops.go:34] apiserver oom_adj: -16
	I0906 16:04:12.053787   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same "kubectl get sa default" probe repeated at ~500ms intervals through 16:04:24.629870 ...]
	I0906 16:04:24.693469   38636 kubeadm.go:1046] duration metric: took 12.752546325s to wait for elevateKubeSystemPrivileges.
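
That burst of "kubectl get sa default" calls is the elevateKubeSystemPrivileges wait: the minikube-rbac cluster-admin binding only sticks once kubeadm's controllers have created the default ServiceAccount. A sketch of that exit-code poll, assuming kubectl is on PATH (the 500ms cadence comes from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// Exits non-zero until the "default" ServiceAccount exists.
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default ServiceAccount present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}
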
	I0906 16:04:24.693487   38636 kubeadm.go:398] StartCluster complete in 4m49.621602402s
	I0906 16:04:24.693510   38636 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:04:24.693618   38636 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 16:04:24.694416   38636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:04:25.209438   38636 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220906155821-22187" rescaled to 1
	I0906 16:04:25.209475   38636 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:04:25.209488   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 16:04:25.209543   38636 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 16:04:25.248550   38636 out.go:177] * Verifying Kubernetes components...
	I0906 16:04:25.209701   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 16:04:25.248613   38636 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248614   38636 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248617   38636 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248621   38636 addons.go:65] Setting dashboard=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.274065   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 16:04:25.323012   38636 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323027   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:25.323031   38636 addons.go:153] Setting addon dashboard=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323035   38636 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323041   38636 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220906155821-22187"
	W0906 16:04:25.349810   38636 addons.go:162] addon storage-provisioner should already be in state true
	W0906 16:04:25.349817   38636 addons.go:162] addon metrics-server should already be in state true
	W0906 16:04:25.349808   38636 addons.go:162] addon dashboard should already be in state true
	I0906 16:04:25.349908   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.349908   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.350008   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.350278   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351712   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351778   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351905   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.372800   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.479636   38636 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 16:04:25.537415   38636 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0906 16:04:25.500699   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 16:04:25.537466   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 16:04:25.579923   38636 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:04:25.616492   38636 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 16:04:25.580057   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.618390   38636 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.675937   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 16:04:25.634198   38636 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220906155821-22187" to be "Ready" ...
	I0906 16:04:25.675960   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 16:04:25.654052   38636 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:04:25.676027   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0906 16:04:25.675946   38636 addons.go:162] addon default-storageclass should already be in state true
	I0906 16:04:25.676093   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.676134   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.676180   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.680582   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.694583   38636 node_ready.go:49] node "embed-certs-20220906155821-22187" has status "Ready":"True"
	I0906 16:04:25.694606   38636 node_ready.go:38] duration metric: took 18.642476ms waiting for node "embed-certs-20220906155821-22187" to be "Ready" ...
	I0906 16:04:25.694617   38636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:04:25.703428   38636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-7hgsh" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:25.769082   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.770815   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.771641   38636 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 16:04:25.771655   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 16:04:25.771721   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.771828   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.846515   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.908743   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 16:04:25.908759   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 16:04:25.923614   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:04:26.012628   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 16:04:26.012643   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 16:04:26.093532   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 16:04:26.093544   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 16:04:26.107106   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 16:04:26.111721   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 16:04:26.111737   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 16:04:26.197860   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 16:04:26.197879   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 16:04:26.222994   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 16:04:26.223005   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 16:04:26.290198   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 16:04:26.290219   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 16:04:26.306943   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 16:04:26.306956   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 16:04:26.389305   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 16:04:26.404625   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 16:04:26.404642   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 16:04:26.502869   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 16:04:26.502883   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 16:04:26.586788   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 16:04:26.586801   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 16:04:26.602971   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 16:04:26.602986   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 16:04:26.687833   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 16:04:26.989360   38636 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.639629341s)
	I0906 16:04:26.989402   38636 start.go:810] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0906 16:04:27.019123   38636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095487172s)
	I0906 16:04:27.105458   38636 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220906155821-22187"
	I0906 16:04:27.721184   38636 pod_ready.go:92] pod "coredns-565d847f94-7hgsh" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:27.721200   38636 pod_ready.go:81] duration metric: took 2.017760025s waiting for pod "coredns-565d847f94-7hgsh" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:27.721212   38636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-hwccr" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:27.884983   38636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.197113945s)
	I0906 16:04:27.919906   38636 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0906 16:04:27.956698   38636 addons.go:414] enableAddons completed in 2.747190456s
	I0906 16:04:29.734002   38636 pod_ready.go:102] pod "coredns-565d847f94-hwccr" in "kube-system" namespace has status "Ready":"False"
	I0906 16:04:30.232781   38636 pod_ready.go:92] pod "coredns-565d847f94-hwccr" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.232795   38636 pod_ready.go:81] duration metric: took 2.511583495s waiting for pod "coredns-565d847f94-hwccr" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.232802   38636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.241018   38636 pod_ready.go:92] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.241028   38636 pod_ready.go:81] duration metric: took 8.220934ms waiting for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.241036   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.246347   38636 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.246358   38636 pod_ready.go:81] duration metric: took 5.317921ms waiting for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.246365   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.251178   38636 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.271910   38636 pod_ready.go:81] duration metric: took 25.535498ms waiting for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.271928   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k97f9" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.278165   38636 pod_ready.go:92] pod "kube-proxy-k97f9" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.278179   38636 pod_ready.go:81] duration metric: took 6.242796ms waiting for pod "kube-proxy-k97f9" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.278197   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.630702   38636 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.630713   38636 pod_ready.go:81] duration metric: took 352.505269ms waiting for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.630719   38636 pod_ready.go:38] duration metric: took 4.93610349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:04:30.630735   38636 api_server.go:51] waiting for apiserver process to appear ...
	I0906 16:04:30.630784   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 16:04:30.645666   38636 api_server.go:71] duration metric: took 5.436188155s to wait for apiserver process to appear ...
	I0906 16:04:30.645679   38636 api_server.go:87] waiting for apiserver healthz status ...
	I0906 16:04:30.645686   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 16:04:30.651159   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 200:
	ok
	I0906 16:04:30.652511   38636 api_server.go:140] control plane version: v1.25.0
	I0906 16:04:30.652524   38636 api_server.go:130] duration metric: took 6.840548ms to wait for apiserver health ...
	I0906 16:04:30.652530   38636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 16:04:30.833833   38636 system_pods.go:59] 9 kube-system pods found
	I0906 16:04:30.833849   38636 system_pods.go:61] "coredns-565d847f94-7hgsh" [94873873-9734-4e1f-8114-f59e04819eec] Running
	I0906 16:04:30.833853   38636 system_pods.go:61] "coredns-565d847f94-hwccr" [14797c46-59df-423f-9376-8faa955f2426] Running
	I0906 16:04:30.833859   38636 system_pods.go:61] "etcd-embed-certs-20220906155821-22187" [eaf284d5-7ece-438d-bf12-b222518876cf] Running
	I0906 16:04:30.833862   38636 system_pods.go:61] "kube-apiserver-embed-certs-20220906155821-22187" [bf038e93-a5ca-48e4-af4c-8d906a875d3a] Running
	I0906 16:04:30.833867   38636 system_pods.go:61] "kube-controller-manager-embed-certs-20220906155821-22187" [a46c5bff-a2cf-4305-8fdd-37c601cb2e63] Running
	I0906 16:04:30.833872   38636 system_pods.go:61] "kube-proxy-k97f9" [36966060-5270-424c-a005-81413d70656a] Running
	I0906 16:04:30.833878   38636 system_pods.go:61] "kube-scheduler-embed-certs-20220906155821-22187" [164df980-70d4-464b-a513-b5174ff3b963] Running
	I0906 16:04:30.833885   38636 system_pods.go:61] "metrics-server-5c8fd5cf8-xq9zv" [73f275fe-7d42-400b-ad93-df387c9ed53d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 16:04:30.833893   38636 system_pods.go:61] "storage-provisioner" [1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd] Running
	I0906 16:04:30.833900   38636 system_pods.go:74] duration metric: took 181.366286ms to wait for pod list to return data ...
	I0906 16:04:30.833906   38636 default_sa.go:34] waiting for default service account to be created ...
	I0906 16:04:31.030564   38636 default_sa.go:45] found service account: "default"
	I0906 16:04:31.030579   38636 default_sa.go:55] duration metric: took 196.655364ms for default service account to be created ...
	I0906 16:04:31.030585   38636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 16:04:31.234390   38636 system_pods.go:86] 9 kube-system pods found
	I0906 16:04:31.234405   38636 system_pods.go:89] "coredns-565d847f94-7hgsh" [94873873-9734-4e1f-8114-f59e04819eec] Running
	I0906 16:04:31.234410   38636 system_pods.go:89] "coredns-565d847f94-hwccr" [14797c46-59df-423f-9376-8faa955f2426] Running
	I0906 16:04:31.234413   38636 system_pods.go:89] "etcd-embed-certs-20220906155821-22187" [eaf284d5-7ece-438d-bf12-b222518876cf] Running
	I0906 16:04:31.234417   38636 system_pods.go:89] "kube-apiserver-embed-certs-20220906155821-22187" [bf038e93-a5ca-48e4-af4c-8d906a875d3a] Running
	I0906 16:04:31.234427   38636 system_pods.go:89] "kube-controller-manager-embed-certs-20220906155821-22187" [a46c5bff-a2cf-4305-8fdd-37c601cb2e63] Running
	I0906 16:04:31.234434   38636 system_pods.go:89] "kube-proxy-k97f9" [36966060-5270-424c-a005-81413d70656a] Running
	I0906 16:04:31.234438   38636 system_pods.go:89] "kube-scheduler-embed-certs-20220906155821-22187" [164df980-70d4-464b-a513-b5174ff3b963] Running
	I0906 16:04:31.234445   38636 system_pods.go:89] "metrics-server-5c8fd5cf8-xq9zv" [73f275fe-7d42-400b-ad93-df387c9ed53d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 16:04:31.234449   38636 system_pods.go:89] "storage-provisioner" [1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd] Running
	I0906 16:04:31.234455   38636 system_pods.go:126] duration metric: took 203.86794ms to wait for k8s-apps to be running ...
	I0906 16:04:31.234461   38636 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 16:04:31.234511   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:31.244461   38636 system_svc.go:56] duration metric: took 9.993449ms WaitForService to wait for kubelet.
	I0906 16:04:31.244474   38636 kubeadm.go:573] duration metric: took 6.035000594s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 16:04:31.244487   38636 node_conditions.go:102] verifying NodePressure condition ...
	I0906 16:04:31.430989   38636 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 16:04:31.431001   38636 node_conditions.go:123] node cpu capacity is 6
	I0906 16:04:31.431008   38636 node_conditions.go:105] duration metric: took 186.51865ms to run NodePressure ...
	I0906 16:04:31.431017   38636 start.go:216] waiting for startup goroutines ...
	I0906 16:04:31.467536   38636 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 16:04:31.509529   38636 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220906155821-22187" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:59:31 UTC, end at Tue 2022-09-06 23:05:25 UTC. --
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.816983452Z" level=info msg="ignoring event" container=92e504425015f6694b5193e4bff39b519a743107dfe95a14ee00c69e7659392e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.887031873Z" level=info msg="ignoring event" container=3e9526bd593d6fa19262a77ffcf3e3e9d0614b9c989c1be95d61d2094bbc89d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:02 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:02.960416254Z" level=info msg="ignoring event" container=9a3eb83394a58bea8d623e5fdd50ad61f681fa2c52f2ac2eb8defbd91e0c958f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.033331515Z" level=info msg="ignoring event" container=39060d60adc24b1d4987133edd9d517608ae65bb59235c4db759c241fe0823fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.099853416Z" level=info msg="ignoring event" container=68d4bd8cb2b7d172bcfd32b932276ca792fbd5929171959d0240f45f614d9eb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.203316804Z" level=info msg="ignoring event" container=dd083af036b897cf9192632b86d9024e5b22472b36eb89c3b9fd96e92a7bc5c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.266198047Z" level=info msg="ignoring event" container=f9761cec1fd49649716fe3be0102bc9455aa518242b895dbf3f0c53079c001a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:03 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:03.387062809Z" level=info msg="ignoring event" container=9d4dd2ec1fae9c25cffc8d9dac79dc346255ec432f0a0cee71a57e269e90e450 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:27 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:27.642540605Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:27 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:27.642583450Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:27 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:27.644792187Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:28 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:28.931409174Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Sep 06 23:04:32 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:32.245174109Z" level=info msg="ignoring event" container=dd948dba4b0a98f82d36fe2ed92ec89b34d46cb38c394adc68af4a561a49d2d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:32 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:32.427411636Z" level=info msg="ignoring event" container=da872e1e0000e6fac34dde89ae3e13635ae0f1746dcfcc8d219d156d94bbb3ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:35 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:35.925164190Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 23:04:36 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:36.898675727Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Sep 06 23:04:42 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:42.911233810Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:42 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:42.911281852Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:42 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:42.934977986Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:04:43 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:43.127858961Z" level=info msg="ignoring event" container=3773d6d6cea7a35b5c686a8b34f8501e8af1b635790d0f4ada7c95aa70fa8fac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:04:43 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:04:43.435681470Z" level=info msg="ignoring event" container=58253699cbd5ccc905b37fd1b4b3755af528d8a40b8ebb64352034aff9210ffe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:05:23.340870018Z" level=info msg="ignoring event" container=1d9990920379075e7f768d8664797acfa5c5a6a38995d09eb17227b11a87c59c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:05:23.746883384Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:05:23.746925331Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 dockerd[550]: time="2022-09-06T23:05:23.747981511Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	1d99909203790       a90209bb39e3d                                                                                    2 seconds ago        Exited              dashboard-metrics-scraper   2                   f98c8b1cd8708
	a5ce9d4ee1154       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   52 seconds ago       Running             kubernetes-dashboard        0                   14bd27279d043
	0bf6434f44ba0       6e38f40d628db                                                                                    58 seconds ago       Running             storage-provisioner         0                   11dc8deb9ec05
	245918d1156c9       5185b96f0becf                                                                                    59 seconds ago       Running             coredns                     0                   e8a490cda9647
	488f4bb96fbdc       58a9a0c6d96f2                                                                                    About a minute ago   Running             kube-proxy                  0                   ea6c7e3349e2c
	360ee6cb836eb       a8a176a5d5d69                                                                                    About a minute ago   Running             etcd                        0                   8c803c85c284b
	2c5c4dd4599e7       1a54c86c03a67                                                                                    About a minute ago   Running             kube-controller-manager     0                   05c8c47bf18f9
	a6ab282f9e2e8       bef2cf3115095                                                                                    About a minute ago   Running             kube-scheduler              0                   a40cdf01a8a30
	2a62b90be79e6       4d2edfd10d3e3                                                                                    About a minute ago   Running             kube-apiserver              0                   8b1c38051a5bb
	
	* 
	* ==> coredns [245918d1156c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220906155821-22187
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220906155821-22187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4
	                    minikube.k8s.io/name=embed-certs-20220906155821-22187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_06T16_04_11_0700
	                    minikube.k8s.io/version=v1.26.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Sep 2022 23:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220906155821-22187
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Sep 2022 23:05:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Sep 2022 23:05:19 +0000   Tue, 06 Sep 2022 23:04:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Sep 2022 23:05:19 +0000   Tue, 06 Sep 2022 23:04:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Sep 2022 23:05:19 +0000   Tue, 06 Sep 2022 23:04:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Sep 2022 23:05:19 +0000   Tue, 06 Sep 2022 23:05:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220906155821-22187
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fa1fae1e124a5b870c936a51ffb740
	  System UUID:                1dba8370-9279-4ea7-9dc1-f6d32eb7589f
	  Boot ID:                    7fe69b84-e343-4ef9-a748-f28e41202905
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.25.0
	  Kube-Proxy Version:         v1.25.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-hwccr                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     61s
	  kube-system                 etcd-embed-certs-20220906155821-22187                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kube-apiserver-embed-certs-20220906155821-22187             250m (4%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-embed-certs-20220906155821-22187    200m (3%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-k97f9                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-embed-certs-20220906155821-22187             100m (1%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 metrics-server-5c8fd5cf8-xq9zv                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         58s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        dashboard-metrics-scraper-7b94984548-zz2mf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-54596f475f-8dtl6                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s (x3 over 80s)  kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x3 over 80s)  kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x2 over 80s)  kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     74s                kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  74s                kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s                kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                74s                kubelet          Node embed-certs-20220906155821-22187 status is now: NodeReady
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           62s                node-controller  Node embed-certs-20220906155821-22187 event: Registered Node embed-certs-20220906155821-22187 in Controller
	  Normal  Starting                 6s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             6s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeNotReady
	  Normal  NodeReady                6s                 kubelet          Node embed-certs-20220906155821-22187 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [360ee6cb836e] <==
	* {"level":"info","ts":"2022-09-06T23:04:06.306Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-09-06T23:04:06.305Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-06T23:04:06.305Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-09-06T23:04:06.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-09-06T23:04:06.950Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220906155821-22187 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-06T23:04:06.950Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T23:04:06.951Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-06T23:04:06.951Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T23:04:06.951Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-06T23:04:06.952Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-06T23:04:06.952Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-06T23:04:06.952Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-09-06T23:04:06.999Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T23:04:06.999Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-06T23:04:06.999Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2022-09-06T23:04:25.631Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"134.315816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-09-06T23:04:25.631Z","caller":"traceutil/trace.go:171","msg":"trace[1816448117] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:359; }","duration":"134.435927ms","start":"2022-09-06T23:04:25.497Z","end":"2022-09-06T23:04:25.631Z","steps":["trace[1816448117] 'range keys from in-memory index tree'  (duration: 134.056276ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-06T23:04:25.631Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.509473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2022-09-06T23:04:25.631Z","caller":"traceutil/trace.go:171","msg":"trace[581965917] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:359; }","duration":"102.869659ms","start":"2022-09-06T23:04:25.528Z","end":"2022-09-06T23:04:25.631Z","steps":["trace[581965917] 'range keys from in-memory index tree'  (duration: 102.423669ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:05:26 up  1:21,  0 users,  load average: 1.33, 1.08, 1.04
	Linux embed-certs-20220906155821-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2a62b90be79e] <==
	* I0906 23:04:09.955205       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 23:04:09.955234       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 23:04:10.213112       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 23:04:10.236038       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 23:04:10.385781       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0906 23:04:10.389292       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0906 23:04:10.389924       1 controller.go:616] quota admission added evaluator for: endpoints
	I0906 23:04:10.392591       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 23:04:10.985631       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0906 23:04:11.742709       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0906 23:04:11.747703       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0906 23:04:11.753774       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0906 23:04:11.821476       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 23:04:24.243969       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0906 23:04:24.443378       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0906 23:04:27.114275       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.98.105.71]
	I0906 23:04:27.833430       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.105.113.30]
	I0906 23:04:27.843694       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.102.183.107]
	W0906 23:04:27.944072       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 23:04:27.944112       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0906 23:04:27.944117       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 23:04:27.944155       1 handler_proxy.go:102] no RequestInfo found in the context
	E0906 23:04:27.944188       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 23:04:27.945223       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [2c5c4dd4599e] <==
	* I0906 23:04:24.843280       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-7hgsh"
	I0906 23:04:24.846851       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-hwccr"
	I0906 23:04:24.862440       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-7hgsh"
	I0906 23:04:26.949316       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I0906 23:04:27.003188       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-xq9zv"
	I0906 23:04:27.718901       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I0906 23:04:27.724386       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 23:04:27.729164       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 23:04:27.732492       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:04:27.732888       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 23:04:27.738609       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:04:27.738734       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:04:27.740690       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-54596f475f to 1"
	I0906 23:04:27.746643       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 23:04:27.752720       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 23:04:27.758319       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:04:27.758529       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0906 23:04:27.798381       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0906 23:04:27.798454       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:04:27.798474       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:04:27.798485       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:04:27.803077       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-54596f475f-8dtl6"
	I0906 23:04:27.841515       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-zz2mf"
	E0906 23:05:18.959379       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0906 23:05:18.967107       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [488f4bb96fbd] <==
	* I0906 23:04:25.797060       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0906 23:04:25.797137       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0906 23:04:25.797173       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 23:04:25.896727       1 server_others.go:206] "Using iptables Proxier"
	I0906 23:04:25.896810       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0906 23:04:25.896826       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0906 23:04:25.896844       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0906 23:04:25.896882       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 23:04:25.897004       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 23:04:25.897181       1 server.go:661] "Version info" version="v1.25.0"
	I0906 23:04:25.897192       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 23:04:25.901687       1 config.go:317] "Starting service config controller"
	I0906 23:04:25.901719       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 23:04:25.901757       1 config.go:226] "Starting endpoint slice config controller"
	I0906 23:04:25.901762       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 23:04:25.904035       1 config.go:444] "Starting node config controller"
	I0906 23:04:25.907488       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 23:04:26.002846       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 23:04:26.002897       1 shared_informer.go:262] Caches are synced for service config
	I0906 23:04:26.008673       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a6ab282f9e2e] <==
	* W0906 23:04:09.001539       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:09.001655       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:09.001687       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 23:04:09.001698       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 23:04:09.001868       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 23:04:09.001699       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 23:04:09.001982       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 23:04:09.002004       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 23:04:09.001747       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:09.002060       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:09.002022       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 23:04:09.002075       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 23:04:09.002326       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:09.002387       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:09.912865       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 23:04:09.912920       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 23:04:09.948704       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:09.948758       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:09.959930       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 23:04:09.959969       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0906 23:04:10.057029       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 23:04:10.057070       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 23:04:10.092021       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 23:04:10.092061       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0906 23:04:10.294004       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:59:31 UTC, end at Tue 2022-09-06 23:05:26 UTC. --
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462360   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/076a5aac-3ba3-4dce-aa96-bcf6faa2dc24-tmp-volume\") pod \"dashboard-metrics-scraper-7b94984548-zz2mf\" (UID: \"076a5aac-3ba3-4dce-aa96-bcf6faa2dc24\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-zz2mf"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462378   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd-tmp\") pod \"storage-provisioner\" (UID: \"1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd\") " pod="kube-system/storage-provisioner"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462400   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tds2w\" (UniqueName: \"kubernetes.io/projected/1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd-kube-api-access-tds2w\") pod \"storage-provisioner\" (UID: \"1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd\") " pod="kube-system/storage-provisioner"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462573   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/73f275fe-7d42-400b-ad93-df387c9ed53d-tmp-dir\") pod \"metrics-server-5c8fd5cf8-xq9zv\" (UID: \"73f275fe-7d42-400b-ad93-df387c9ed53d\") " pod="kube-system/metrics-server-5c8fd5cf8-xq9zv"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462677   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gctwq\" (UniqueName: \"kubernetes.io/projected/b44daf06-cea8-4179-b626-1a1e13fc9778-kube-api-access-gctwq\") pod \"kubernetes-dashboard-54596f475f-8dtl6\" (UID: \"b44daf06-cea8-4179-b626-1a1e13fc9778\") " pod="kubernetes-dashboard/kubernetes-dashboard-54596f475f-8dtl6"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462714   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14797c46-59df-423f-9376-8faa955f2426-config-volume\") pod \"coredns-565d847f94-hwccr\" (UID: \"14797c46-59df-423f-9376-8faa955f2426\") " pod="kube-system/coredns-565d847f94-hwccr"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462765   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36966060-5270-424c-a005-81413d70656a-lib-modules\") pod \"kube-proxy-k97f9\" (UID: \"36966060-5270-424c-a005-81413d70656a\") " pod="kube-system/kube-proxy-k97f9"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462824   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv49b\" (UniqueName: \"kubernetes.io/projected/14797c46-59df-423f-9376-8faa955f2426-kube-api-access-cv49b\") pod \"coredns-565d847f94-hwccr\" (UID: \"14797c46-59df-423f-9376-8faa955f2426\") " pod="kube-system/coredns-565d847f94-hwccr"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462855   10973 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4m67\" (UniqueName: \"kubernetes.io/projected/73f275fe-7d42-400b-ad93-df387c9ed53d-kube-api-access-d4m67\") pod \"metrics-server-5c8fd5cf8-xq9zv\" (UID: \"73f275fe-7d42-400b-ad93-df387c9ed53d\") " pod="kube-system/metrics-server-5c8fd5cf8-xq9zv"
	Sep 06 23:05:20 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:20.462881   10973 reconciler.go:169] "Reconciler: start to sync state"
	Sep 06 23:05:21 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:21.621794   10973 request.go:601] Waited for 1.10350225s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 06 23:05:21 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:21.670689   10973 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220906155821-22187\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220906155821-22187"
	Sep 06 23:05:21 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:21.836740   10973 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220906155821-22187\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220906155821-22187"
	Sep 06 23:05:22 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:22.092013   10973 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220906155821-22187\" already exists" pod="kube-system/etcd-embed-certs-20220906155821-22187"
	Sep 06 23:05:22 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:22.249561   10973 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220906155821-22187\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220906155821-22187"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:23.124660   10973 scope.go:115] "RemoveContainer" containerID="58253699cbd5ccc905b37fd1b4b3755af528d8a40b8ebb64352034aff9210ffe"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:23.544151   10973 scope.go:115] "RemoveContainer" containerID="1d9990920379075e7f768d8664797acfa5c5a6a38995d09eb17227b11a87c59c"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:23.544342   10973 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7b94984548-zz2mf_kubernetes-dashboard(076a5aac-3ba3-4dce-aa96-bcf6faa2dc24)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-zz2mf" podUID=076a5aac-3ba3-4dce-aa96-bcf6faa2dc24
	Sep 06 23:05:23 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:23.544554   10973 scope.go:115] "RemoveContainer" containerID="58253699cbd5ccc905b37fd1b4b3755af528d8a40b8ebb64352034aff9210ffe"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:23.748481   10973 remote_image.go:222] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:23.748538   10973 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Sep 06 23:05:23 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:23.748624   10973 kuberuntime_manager.go:862] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d4m67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c8fd5cf8-xq9zv_kube-system(73f275fe-7d42-400b-ad93-df387c9ed53d): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Sep 06 23:05:23 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:23.748672   10973 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c8fd5cf8-xq9zv" podUID=73f275fe-7d42-400b-ad93-df387c9ed53d
	Sep 06 23:05:24 embed-certs-20220906155821-22187 kubelet[10973]: I0906 23:05:24.552579   10973 scope.go:115] "RemoveContainer" containerID="1d9990920379075e7f768d8664797acfa5c5a6a38995d09eb17227b11a87c59c"
	Sep 06 23:05:24 embed-certs-20220906155821-22187 kubelet[10973]: E0906 23:05:24.552755   10973 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7b94984548-zz2mf_kubernetes-dashboard(076a5aac-3ba3-4dce-aa96-bcf6faa2dc24)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-zz2mf" podUID=076a5aac-3ba3-4dce-aa96-bcf6faa2dc24
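
	Two of the kubelet errors above are expected noise rather than regressions: judging by the image name, the test deliberately points metrics-server at the unresolvable registry fake.domain, so every pull ends in ErrImagePull, and dashboard-metrics-scraper is in an ordinary 10s CrashLoopBackOff after one container exit. The DNS half of the ErrImagePull reproduces outside the Docker daemon; a minimal sketch in Go (only the hostname is taken from the log):

	    // lookup.go: reproduce the "no such host" failure the Docker daemon
	    // reports when pulling from fake.domain.
	    package main

	    import (
	    	"fmt"
	    	"net"
	    )

	    func main() {
	    	addrs, err := net.LookupHost("fake.domain")
	    	if err != nil {
	    		// Expected: "lookup fake.domain: no such host" (resolver details vary).
	    		fmt.Println("lookup failed:", err)
	    		return
	    	}
	    	fmt.Println("unexpectedly resolved:", addrs)
	    }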
	
	* 
	* ==> kubernetes-dashboard [a5ce9d4ee115] <==
	* 2022/09/06 23:04:33 Starting overwatch
	2022/09/06 23:04:33 Using namespace: kubernetes-dashboard
	2022/09/06 23:04:33 Using in-cluster config to connect to apiserver
	2022/09/06 23:04:33 Using secret token for csrf signing
	2022/09/06 23:04:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/09/06 23:04:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/09/06 23:04:33 Successful initial request to the apiserver, version: v1.25.0
	2022/09/06 23:04:33 Generating JWE encryption key
	2022/09/06 23:04:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/09/06 23:04:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/09/06 23:04:34 Initializing JWE encryption key from synchronized object
	2022/09/06 23:04:34 Creating in-cluster Sidecar client
	2022/09/06 23:04:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/09/06 23:04:34 Serving insecurely on HTTP port: 9090
	2022/09/06 23:05:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [0bf6434f44ba] <==
	* I0906 23:04:28.071934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 23:04:28.079661       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 23:04:28.079724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 23:04:28.084421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 23:04:28.084535       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220906155821-22187_eb8b2fed-6779-444b-8beb-ed01bacb4e81!
	I0906 23:04:28.084514       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18b8a35b-b458-4ed7-8e53-7663543ebb78", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220906155821-22187_eb8b2fed-6779-444b-8beb-ed01bacb4e81 became leader
	I0906 23:04:28.184842       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220906155821-22187_eb8b2fed-6779-444b-8beb-ed01bacb4e81!
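
	The storage-provisioner log shows the standard single-writer pattern: acquire a lock named k8s.io-minikube-hostpath in kube-system, emit a LeaderElection event, then start the provisioner controller. This build still uses an Endpoints-based lock (note Kind:"Endpoints" in the event); a minimal sketch of the same pattern with client-go's newer LeaseLock, where the lease namespace and name come from the log and the identity and kubeconfig handling are assumptions:

	    // leader.go: minimal leader-election loop in the style of the
	    // storage-provisioner log above. LeaseLock stands in for the
	    // provisioner's Endpoints lock; not part of the minikube code.
	    package main

	    import (
	    	"context"
	    	"log"
	    	"os"
	    	"time"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    	"k8s.io/client-go/tools/leaderelection"
	    	"k8s.io/client-go/tools/leaderelection/resourcelock"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	client := kubernetes.NewForConfigOrDie(cfg)

	    	id, _ := os.Hostname() // assumed identity; the provisioner uses name_uuid
	    	lock := &resourcelock.LeaseLock{
	    		LeaseMeta: metav1.ObjectMeta{
	    			Namespace: "kube-system",
	    			Name:      "k8s.io-minikube-hostpath", // lock name from the log
	    		},
	    		Client:     client.CoordinationV1(),
	    		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	    	}

	    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
	    		Lock:            lock,
	    		ReleaseOnCancel: true,
	    		LeaseDuration:   15 * time.Second,
	    		RenewDeadline:   10 * time.Second,
	    		RetryPeriod:     2 * time.Second,
	    		Callbacks: leaderelection.LeaderCallbacks{
	    			OnStartedLeading: func(ctx context.Context) {
	    				log.Println("acquired lease; starting provisioner controller")
	    				<-ctx.Done() // real controller work would run here
	    			},
	    			OnStoppedLeading: func() {
	    				log.Println("lost lease; stopping")
	    			},
	    		},
	    	})
	    }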
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220906155821-22187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c8fd5cf8-xq9zv
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220906155821-22187 describe pod metrics-server-5c8fd5cf8-xq9zv
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220906155821-22187 describe pod metrics-server-5c8fd5cf8-xq9zv: exit status 1 (56.447351ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-xq9zv" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220906155821-22187 describe pod metrics-server-5c8fd5cf8-xq9zv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (42.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
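
Everything that follows is a single poll loop: the helper lists pods matching the label selector every few seconds until the 9m0s deadline, logging a WARNING per failed attempt, and the failure mode shifts from EOF (the apiserver connection dropping) to "client rate limiter Wait returned an error: context deadline exceeded" once the surrounding context expires. The interleaved cert_rotation errors come from client-go's certificate-reload watcher stat-ing client.crt files of profiles deleted earlier in the run; they are unrelated to this test. A minimal sketch of that style of wait loop in Go, assuming a client-go clientset built from $KUBECONFIG (wait.PollImmediate is the era-appropriate helper; namespace, selector, and timeout come from the test message above):

    // waitpods.go: poll for pods matching a label selector until a deadline,
    // logging each failed list attempt, in the style of the warnings below.
    // Hypothetical wiring; not the actual helpers_test.go implementation.
    package main

    import (
    	"context"
    	"log"
    	"os"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods blocks until at least one pod matches the selector or the
    // timeout elapses; list errors are logged and the poll continues.
    func waitForPods(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	return wait.PollImmediate(3*time.Second, timeout, func() (bool, error) {
    		pods, err := client.CoreV1().Pods(ns).List(context.Background(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			log.Printf("WARNING: pod list for %q %q returned: %v", ns, selector, err)
    			return false, nil
    		}
    		return len(pods.Items) > 0, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForPods(client, "kubernetes-dashboard",
    		"k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
    		log.Fatalf("dashboard pods never appeared: %v", err)
    	}
    	log.Println("dashboard pods found")
    }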

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:05:44.032734   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:06:24.317216   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:06:37.560186   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[previous line repeated 5 more times]
E0906 16:07:41.118466   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 16:07:41.279977   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 16:07:47.095870   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:07:47.460486   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:08:37.709216   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:08:49.018241   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:09:10.507205   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:09:45.083993   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:09:56.182481   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 16:09:56.979494   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:10:00.524880   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/default-k8s-different-port-20220906154915-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:10:44.031149   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:11:24.314664   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0906 16:11:37.559333   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59560/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous line repeated 55 more times]
E0906 16:12:41.116473   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0906 16:12:41.279019   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous line repeated 4 more times]
E0906 16:12:47.094945   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0906 16:12:47.459500   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous line repeated 49 more times]
E0906 16:13:37.710137   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 10 more times)
E0906 16:13:49.015374   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 36 more times)
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 2 (409.862215ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-20220906154143-22187" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220906154143-22187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220906154143-22187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.425µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220906154143-22187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
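The wall of "client rate limiter Wait returned an error: context deadline exceeded" warnings above is the signature of a client-go polling loop whose parent context has already expired: each retry fails inside the client-side rate limiter before any request reaches the API server. A minimal, hypothetical sketch of such a loop (assuming client-go; this is not the actual helpers_test.go code):

	// Hypothetical sketch, not the minikube test code: poll for dashboard pods
	// under a deadline, logging the same style of warning on each failed list.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForDashboard(ctx context.Context, cs kubernetes.Interface) error {
		ticker := time.NewTicker(3 * time.Second)
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Once ctx is past its deadline, every List fails here with
				// "client rate limiter Wait returned an error: context deadline exceeded".
				fmt.Printf("WARNING: pod list returned: %v\n", err)
			} else if len(pods.Items) > 0 {
				return nil
			}
			select {
			case <-ctx.Done():
				// The test harness's equivalent of this branch is what logs
				// "timed out waiting for the condition".
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		if err := waitForDashboard(ctx, kubernetes.NewForConfigOrDie(cfg)); err != nil {
			fmt.Println("failed:", err)
		}
	}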
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220906154143-22187
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220906154143-22187:

-- stdout --
	[
	    {
	        "Id": "3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8",
	        "Created": "2022-09-06T22:41:49.616534464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 252066,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-06T22:47:29.039125207Z",
	            "FinishedAt": "2022-09-06T22:47:26.139154051Z"
	        },
	        "Image": "sha256:2ba71c3417619fdcfc963d836ce066d238b9a7120a650b2e3e1479172675dba7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/hosts",
	        "LogPath": "/var/lib/docker/containers/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8/3ccebcd496a258876daf9080330dc747b000e5ed2d46306dcee2b578da5dade8-json.log",
	        "Name": "/old-k8s-version-20220906154143-22187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220906154143-22187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220906154143-22187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447-init/diff:/var/lib/docker/overlay2/a562149d66f4eb8f5518f0ead57ae27ab583c1eeeb0d38f07f5396cd3866d815/diff:/var/lib/docker/overlay2/74eccebe6faed6975afb963d12613841faca02bf4d174485c963e2527c53a200/diff:/var/lib/docker/overlay2/0bdf5bc6b0a6ccd0e955f7ebf2bcfa87f9201bdf2c218bd47e6d1cd6025fb96b/diff:/var/lib/docker/overlay2/c4c2dd1586b51d6e8ca7a504a53ffbb8b2973fb0ddc21be2d58ba761552e32ff/diff:/var/lib/docker/overlay2/f5ff15396dfa63df0418175f14b4dd4abf0410a489aa00b18d5779478cbed022/diff:/var/lib/docker/overlay2/4f6df783c35248d9995096ab352c9bebd3d0c540232ed107971de794a28fcaf5/diff:/var/lib/docker/overlay2/fd71c2f32b76c099747ff260b8cd6a94172bf263f86463f1daf0764db4e83999/diff:/var/lib/docker/overlay2/496c52c2d5e01156bf5ff28fa60809272db59e3c59bafd30204f24fb08861446/diff:/var/lib/docker/overlay2/57deb25eee11fbfc14fd895c916e29970e206c2727688c054f27f0f25686fd55/diff:/var/lib/docker/overlay2/5a8433
204278b53d60d5f2b75b5aacd615ae7a0ebdd67a29ec13cd33f9853db9/diff:/var/lib/docker/overlay2/2932b2cd731955e5faf801c340b6e1022996064615e6ae972e6b293cd8b2fa51/diff:/var/lib/docker/overlay2/a0a1e1937feb64b0d7a5c9ac655ec573113780fdaaffc81cf0f4da5950c78f8a/diff:/var/lib/docker/overlay2/2e595f3b99c92e64209782201f20aff147f2c576dd2999efcc76f866eca52ddc/diff:/var/lib/docker/overlay2/464360d4c39f56fc8d6fa835135ac5814ef91437da753fdd4560797cd3b027eb/diff:/var/lib/docker/overlay2/83211c8e9021816fa8c23de95334bd655b68395bb92d7d61e12d7203dc3d714b/diff:/var/lib/docker/overlay2/a8d8fb2f88288922a9b0bf7943c62a3cfcc024a78581e37d5d3c3acc560f553b/diff:/var/lib/docker/overlay2/96c72ca78e29930d7154438af9871d4cfdc2e24aa532a6a6d3c76d25dcfb5eb9/diff:/var/lib/docker/overlay2/519ceecd99b3a0789bef5c7f67cf247268443d5309ba11fc4bd60f359a26e5fe/diff:/var/lib/docker/overlay2/f0b0ca5f04610107e34e6462cb9431d6bfb9cbd96cb632feb1b47b83e3b523e3/diff:/var/lib/docker/overlay2/015f27bc54118485988dfd1bac9b6d916497512d4c5c00053d2defd3844f397c/diff:/var/lib/d
ocker/overlay2/c7c3acdd1162eae501ece2f4a765e7277af9b67363596f1b616f62ec1ca1ad9f/diff:/var/lib/docker/overlay2/9d1323620e50a1dcedd43e2f57dd25e3968aa0a5ae54788552b9b82e8cacef60/diff:/var/lib/docker/overlay2/7aa7cb069fa8adbd0959a63f126ab99eb426108fec0a7a84fe851c4740adaa40/diff:/var/lib/docker/overlay2/071db635c09ef55a6c883833fe3d08a6fa405d0d24debb89e72f2878fd0abd7a/diff:/var/lib/docker/overlay2/987f7bbc210fec0b342c78f5e7a4c0cf6bbbe7d8799634e00a806f768c2d8d3c/diff:/var/lib/docker/overlay2/462fd8a072151e44657567c3ff2efe1faa8244e9848407d97306e551bb1454e8/diff:/var/lib/docker/overlay2/98707451f52f942875bf1a8e247c85fa0f1d1ee92784f52ceb6b096e2efdf533/diff:/var/lib/docker/overlay2/e5066a3945cc023c1629aa29bde0e437b188e70338451c71049bf3c33a7e555c/diff:/var/lib/docker/overlay2/14c2b9d6745644b40e95c6cc56ff6170d6c03ed111777658cbe2daac2730a6a9/diff:/var/lib/docker/overlay2/8479935d545eb59e481aaa679ab8f60b391464287762a8c90a5cffff477bb68d/diff:/var/lib/docker/overlay2/fb806fe43c96acd77d33e891e616cf29950417d95d9a5428b16f0bc908e
d5aa1/diff:/var/lib/docker/overlay2/4da1a1ac77f9d2641c6379794ff698c0af3eccf9c96f08f428548ae22b260b5e/diff:/var/lib/docker/overlay2/30fc26375d1ca954f0dc6ef93e7df2bfbf970493b4a8bc7e8df2ad8c1be420a0/diff:/var/lib/docker/overlay2/26af946e2832e6fc46a8fe67f66364e371a6dd8bb644a094d7a72f0e25037bdf/diff:/var/lib/docker/overlay2/1452a272fd05aa9fca5a7ec62b972f6a661d0bc955e4dfc63ef2ddf4fce7eceb/diff:/var/lib/docker/overlay2/a65b8c56de8c6974a72ff9eb3ccbdb9aae618ddcbbe1e8d65186965a884ef056/diff:/var/lib/docker/overlay2/ead0a1e4bdf1831cf3d67779cc393228e236145e457493de05388e10e77028d8/diff:/var/lib/docker/overlay2/6fd54f0af6de98ede514110cb94fbd23ea44c265aa2128c1d7f9fa973c21d1dc/diff:/var/lib/docker/overlay2/4e1c05ee18d705f9265e361ccd75b65824b8ea694cf8c94032cb15561a4e8e4e/diff:/var/lib/docker/overlay2/943d49f99d14345240a33491159a383efafbf57de90cf2766b7468b7ce9a7a15/diff:/var/lib/docker/overlay2/34acb6edcafe85cd91851d5c497b31d1aedd5724caa80176cec756b07cab5e88/diff:/var/lib/docker/overlay2/616725bf00ee410535fc74d0c2b833611f875f
36f0acd64b9a76b0d3949b9150/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ac547ea3e5ca47a66946b75ad2142ca777ca0c2891e5cf89e36574deede447/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220906154143-22187",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220906154143-22187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220906154143-22187",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220906154143-22187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a2118a2c36e1b5c44aafe44f5808c04fdc08f7c9c97617d0abe3804e5920b4f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59556"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59557"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59558"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59559"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59560"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7a2118a2c36e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220906154143-22187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ccebcd496a2",
	                        "old-k8s-version-20220906154143-22187"
	                    ],
	                    "NetworkID": "3e22c4664759861d82314ff89c941b324eadf283ebb8fd6949e8fc4ad4c9a041",
	                    "EndpointID": "b81530b6afb4e1c30b7c1e1d7bbcce0431a21d5b730d06b677fa03cd39f407d8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
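The full docker inspect dump above is mostly noise for this failure; the fields the post-mortem actually relies on (container state and published ports) can be read directly. A hypothetical sketch using the Docker Go SDK (github.com/docker/docker/client; not part of the test suite):

	// Hypothetical sketch: pull just the state and port bindings seen in the
	// inspect JSON above, instead of dumping the whole document.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Container name taken from the log above.
		info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-20220906154143-22187")
		if err != nil {
			panic(err)
		}
		fmt.Println("status:", info.State.Status) // "running" in the inspect output above
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}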
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 2 (408.265927ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
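The --format={{.APIServer}} and --format={{.Host}} flags in the two status runs above are Go text/template expressions evaluated against minikube's status struct, which is how the same profile can print "Stopped" for the API server and "Running" for the host. A reduced, hypothetical illustration of that mechanism (struct and field names assumed for the sketch):

	// Hypothetical reduction of minikube's status templating: evaluate a Go
	// text/template against a struct of per-component states.
	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors, in reduced form, the kind of struct the templates see.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Stopped"} // values from the runs above
		hostTmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = hostTmpl.Execute(os.Stdout, st) // prints "Running", as in the stdout block above
	}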
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220906154143-22187 logs -n 25: (3.470697228s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:55 PDT | 06 Sep 22 15:55 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220906154915-22187 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | default-k8s-different-port-20220906154915-22187            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:56 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:56 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220906155618-22187 --memory=2200           | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.0              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:57 PDT | 06 Sep 22 15:57 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220906155618-22187                 | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | newest-cni-20220906155618-22187                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220906155820-22187      | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:58 PDT |
	|         | disable-driver-mounts-20220906155820-22187                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:58 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 15:59 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 15:59 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.0                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:04 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:04 PDT | 06 Sep 22 16:04 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:05 PDT | 06 Sep 22 16:05 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:05 PDT | 06 Sep 22 16:05 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220906155821-22187                | jenkins | v1.26.1 | 06 Sep 22 16:05 PDT | 06 Sep 22 16:05 PDT |
	|         | embed-certs-20220906155821-22187                           |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 15:59:30
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 15:59:30.262038   38636 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:59:30.262188   38636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:59:30.262193   38636 out.go:309] Setting ErrFile to fd 2...
	I0906 15:59:30.262197   38636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:59:30.262308   38636 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:59:30.262744   38636 out.go:303] Setting JSON to false
	I0906 15:59:30.277675   38636 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10741,"bootTime":1662494429,"procs":336,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 15:59:30.277782   38636 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 15:59:30.299234   38636 out.go:177] * [embed-certs-20220906155821-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 15:59:30.341461   38636 notify.go:193] Checking for updates...
	I0906 15:59:30.363080   38636 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 15:59:30.384168   38636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:59:30.405458   38636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 15:59:30.426996   38636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 15:59:30.448360   38636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 15:59:30.470635   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:59:30.471106   38636 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 15:59:30.539352   38636 docker.go:137] docker version: linux-20.10.17
	I0906 15:59:30.539462   38636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:59:30.670843   38636 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:59:30.614641007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:59:30.712577   38636 out.go:177] * Using the docker driver based on existing profile
	I0906 15:59:30.734837   38636 start.go:284] selected driver: docker
	I0906 15:59:30.734870   38636 start.go:808] validating driver "docker" against &{Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:30.735025   38636 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 15:59:30.738354   38636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 15:59:30.869658   38636 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 22:59:30.81424686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 15:59:30.869799   38636 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 15:59:30.869818   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:30.869829   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:30.869843   38636 start_flags.go:310] config:
	{Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:30.912149   38636 out.go:177] * Starting control plane node embed-certs-20220906155821-22187 in cluster embed-certs-20220906155821-22187
	I0906 15:59:30.933415   38636 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 15:59:30.954429   38636 out.go:177] * Pulling base image ...
	I0906 15:59:31.001627   38636 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:59:31.001689   38636 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 15:59:31.001724   38636 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 15:59:31.001744   38636 cache.go:57] Caching tarball of preloaded images
	I0906 15:59:31.001934   38636 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 15:59:31.001957   38636 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.0 on docker
	I0906 15:59:31.002893   38636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/config.json ...
	I0906 15:59:31.066643   38636 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon, skipping pull
	I0906 15:59:31.066664   38636 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in daemon, skipping load
	I0906 15:59:31.066675   38636 cache.go:208] Successfully downloaded all kic artifacts
	I0906 15:59:31.066736   38636 start.go:364] acquiring machines lock for embed-certs-20220906155821-22187: {Name:mkf641e2928acfedb898f07b24fd168dccdc0551 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 15:59:31.066861   38636 start.go:368] acquired machines lock for "embed-certs-20220906155821-22187" in 104.801µs
	I0906 15:59:31.066880   38636 start.go:96] Skipping create...Using existing machine configuration
	I0906 15:59:31.066891   38636 fix.go:55] fixHost starting: 
	I0906 15:59:31.067105   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 15:59:31.130023   38636 fix.go:103] recreateIfNeeded on embed-certs-20220906155821-22187: state=Stopped err=<nil>
	W0906 15:59:31.130050   38636 fix.go:129] unexpected machine state, will restart: <nil>
	I0906 15:59:31.173435   38636 out.go:177] * Restarting existing docker container for "embed-certs-20220906155821-22187" ...
	I0906 15:59:31.194813   38636 cli_runner.go:164] Run: docker start embed-certs-20220906155821-22187
	I0906 15:59:31.539043   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 15:59:31.604033   38636 kic.go:415] container "embed-certs-20220906155821-22187" state is running.
	I0906 15:59:31.604697   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:31.675958   38636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/config.json ...
	I0906 15:59:31.676353   38636 machine.go:88] provisioning docker machine ...
	I0906 15:59:31.676379   38636 ubuntu.go:169] provisioning hostname "embed-certs-20220906155821-22187"
	I0906 15:59:31.676439   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:31.744270   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:31.744484   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:31.744500   38636 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220906155821-22187 && echo "embed-certs-20220906155821-22187" | sudo tee /etc/hostname
	I0906 15:59:31.866514   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220906155821-22187
	
	I0906 15:59:31.866600   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:31.931384   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:31.931532   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:31.931548   38636 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220906155821-22187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220906155821-22187/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220906155821-22187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 15:59:32.043786   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:59:32.043809   38636 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube}
	I0906 15:59:32.043831   38636 ubuntu.go:177] setting up certificates
	I0906 15:59:32.043843   38636 provision.go:83] configureAuth start
	I0906 15:59:32.043910   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:32.109953   38636 provision.go:138] copyHostCerts
	I0906 15:59:32.110077   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem, removing ...
	I0906 15:59:32.110087   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem
	I0906 15:59:32.110175   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.pem (1082 bytes)
	I0906 15:59:32.110375   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem, removing ...
	I0906 15:59:32.110389   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem
	I0906 15:59:32.110445   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cert.pem (1123 bytes)
	I0906 15:59:32.110625   38636 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem, removing ...
	I0906 15:59:32.110632   38636 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem
	I0906 15:59:32.110688   38636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/key.pem (1675 bytes)
	I0906 15:59:32.110800   38636 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220906155821-22187 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220906155821-22187]
	I0906 15:59:32.234910   38636 provision.go:172] copyRemoteCerts
	I0906 15:59:32.234973   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 15:59:32.235024   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.301797   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
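	For reference, the sshutil client above is a plain SSH connection to the container's published port 22 on loopback. A roughly equivalent manual connection (a sketch; SSH_KEY is a placeholder for the machines/embed-certs-20220906155821-22187/id_rsa path shown in the log line above):
		ssh -o StrictHostKeyChecking=no -i "$SSH_KEY" -p 60235 docker@127.0.0.1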
	I0906 15:59:32.384511   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 15:59:32.404630   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0906 15:59:32.423185   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 15:59:32.442534   38636 provision.go:86] duration metric: configureAuth took 398.671593ms
	I0906 15:59:32.442548   38636 ubuntu.go:193] setting minikube options for container-runtime
	I0906 15:59:32.442701   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:59:32.442763   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.508255   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.508405   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.508426   38636 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 15:59:32.623407   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 15:59:32.623421   38636 ubuntu.go:71] root file system type: overlay
	I0906 15:59:32.623580   38636 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 15:59:32.623645   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.688184   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.688365   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.688423   38636 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 15:59:32.811885   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 15:59:32.811975   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:32.875508   38636 main.go:134] libmachine: Using SSH client type: native
	I0906 15:59:32.875661   38636 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5a40] 0x13e8bc0 <nil>  [] 0s} 127.0.0.1 60235 <nil> <nil>}
	I0906 15:59:32.875674   38636 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
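	The command above is an idempotent install: diff exits non-zero when the rendered unit differs from the installed one (or the installed one is missing), so the move/reload/restart branch runs only when something actually changed. Expanded for readability (a sketch, not minikube's own code):
		if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
			sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
			sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
		fi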
	I0906 15:59:32.994163   38636 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0906 15:59:32.994185   38636 machine.go:91] provisioned docker machine in 1.317820355s
	I0906 15:59:32.994196   38636 start.go:300] post-start starting for "embed-certs-20220906155821-22187" (driver="docker")
	I0906 15:59:32.994202   38636 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 15:59:32.994271   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 15:59:32.994324   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.059474   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.140744   38636 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 15:59:33.144225   38636 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 15:59:33.144240   38636 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 15:59:33.144246   38636 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 15:59:33.144251   38636 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0906 15:59:33.144259   38636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/addons for local assets ...
	I0906 15:59:33.144377   38636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files for local assets ...
	I0906 15:59:33.144520   38636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem -> 221872.pem in /etc/ssl/certs
	I0906 15:59:33.144661   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 15:59:33.151919   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:59:33.171420   38636 start.go:303] post-start completed in 177.213688ms
	I0906 15:59:33.171494   38636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:59:33.171543   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.236286   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.315015   38636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 15:59:33.319490   38636 fix.go:57] fixHost completed within 2.252593148s
	I0906 15:59:33.319503   38636 start.go:83] releasing machines lock for "embed-certs-20220906155821-22187", held for 2.252628285s
	I0906 15:59:33.319576   38636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220906155821-22187
	I0906 15:59:33.383050   38636 ssh_runner.go:195] Run: systemctl --version
	I0906 15:59:33.383109   38636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 15:59:33.383135   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.383168   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:33.450261   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.450290   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 15:59:33.581030   38636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 15:59:33.590993   38636 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0906 15:59:33.591044   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 15:59:33.602299   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 15:59:33.615635   38636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 15:59:33.686986   38636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 15:59:33.757095   38636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:59:33.825045   38636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 15:59:34.060910   38636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 15:59:34.126849   38636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 15:59:34.192180   38636 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0906 15:59:34.202955   38636 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 15:59:34.203017   38636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 15:59:34.206437   38636 start.go:471] Will wait 60s for crictl version
	I0906 15:59:34.206478   38636 ssh_runner.go:195] Run: sudo crictl version
	I0906 15:59:34.302591   38636 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0906 15:59:34.302665   38636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:59:34.337107   38636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 15:59:34.413758   38636 out.go:204] * Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
	I0906 15:59:34.413920   38636 cli_runner.go:164] Run: docker exec -t embed-certs-20220906155821-22187 dig +short host.docker.internal
	I0906 15:59:34.525925   38636 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0906 15:59:34.526040   38636 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0906 15:59:34.530030   38636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
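	This one-liner is an idempotent /etc/hosts update: drop any stale host.minikube.internal entry, append the current mapping, and copy the result back through a temp file (a bare `> /etc/hosts` redirection would run in the unprivileged shell, not under sudo). Spelled out as a sketch:
		{ grep -v $'\thost.minikube.internal$' /etc/hosts
		  echo $'192.168.65.2\thost.minikube.internal'
		} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts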
	I0906 15:59:34.539714   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:34.603049   38636 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 15:59:34.603134   38636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:59:34.633537   38636 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:59:34.633555   38636 docker.go:542] Images already preloaded, skipping extraction
	I0906 15:59:34.633621   38636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 15:59:34.664984   38636 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.0
	registry.k8s.io/kube-controller-manager:v1.25.0
	registry.k8s.io/kube-scheduler:v1.25.0
	registry.k8s.io/kube-proxy:v1.25.0
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0906 15:59:34.665007   38636 cache_images.go:84] Images are preloaded, skipping loading
	I0906 15:59:34.665091   38636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 15:59:34.744509   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:34.744522   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:34.744536   38636 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 15:59:34.744551   38636 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220906155821-22187 NodeName:embed-certs-20220906155821-22187 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0906 15:59:34.744685   38636 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220906155821-22187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
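	This rendered config is what the restart path hands to kubeadm once it has been copied to /var/tmp/minikube/kubeadm.yaml (see the scp step below). Applied by hand it would look roughly like the following sketch (minikube may pass additional flags such as --ignore-preflight-errors):
		sudo /var/lib/minikube/binaries/v1.25.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml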
	
	I0906 15:59:34.744775   38636 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220906155821-22187 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 15:59:34.744831   38636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
	I0906 15:59:34.752036   38636 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 15:59:34.752086   38636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 15:59:34.758799   38636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0906 15:59:34.770909   38636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 15:59:34.782836   38636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0906 15:59:34.795526   38636 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0906 15:59:34.799185   38636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 15:59:34.808319   38636 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187 for IP: 192.168.76.2
	I0906 15:59:34.808436   38636 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key
	I0906 15:59:34.808488   38636 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key
	I0906 15:59:34.808571   38636 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/client.key
	I0906 15:59:34.808633   38636 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.key.31bdca25
	I0906 15:59:34.808689   38636 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.key
	I0906 15:59:34.808881   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem (1338 bytes)
	W0906 15:59:34.808918   38636 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187_empty.pem, impossibly tiny 0 bytes
	I0906 15:59:34.808930   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 15:59:34.808969   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/ca.pem (1082 bytes)
	I0906 15:59:34.809000   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/cert.pem (1123 bytes)
	I0906 15:59:34.809031   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/key.pem (1675 bytes)
	I0906 15:59:34.809090   38636 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem (1708 bytes)
	I0906 15:59:34.809639   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 15:59:34.826558   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 15:59:34.842729   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 15:59:34.859199   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/embed-certs-20220906155821-22187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 15:59:34.875553   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 15:59:34.892683   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 15:59:34.909267   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 15:59:34.925586   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 15:59:34.943279   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/certs/22187.pem --> /usr/share/ca-certificates/22187.pem (1338 bytes)
	I0906 15:59:34.960570   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/ssl/certs/221872.pem --> /usr/share/ca-certificates/221872.pem (1708 bytes)
	I0906 15:59:34.976829   38636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 15:59:34.993916   38636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 15:59:35.006394   38636 ssh_runner.go:195] Run: openssl version
	I0906 15:59:35.011296   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22187.pem && ln -fs /usr/share/ca-certificates/22187.pem /etc/ssl/certs/22187.pem"
	I0906 15:59:35.019183   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.023061   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep  6 21:50 /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.023103   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22187.pem
	I0906 15:59:35.028251   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22187.pem /etc/ssl/certs/51391683.0"
	I0906 15:59:35.035345   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221872.pem && ln -fs /usr/share/ca-certificates/221872.pem /etc/ssl/certs/221872.pem"
	I0906 15:59:35.042841   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.046567   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep  6 21:50 /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.046608   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221872.pem
	I0906 15:59:35.051690   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221872.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 15:59:35.060553   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 15:59:35.068394   38636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.072508   38636 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep  6 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.072548   38636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 15:59:35.078010   38636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
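	The three blocks above follow OpenSSL's CA lookup convention: a certificate under /etc/ssl/certs is found via a symlink named <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints. Installing one CA by hand, as a sketch:
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run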
	I0906 15:59:35.085338   38636 kubeadm.go:396] StartCluster: {Name:embed-certs-20220906155821-22187 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:embed-certs-20220906155821-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 15:59:35.085441   38636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:59:35.114198   38636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 15:59:35.121678   38636 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0906 15:59:35.121695   38636 kubeadm.go:627] restartCluster start
	I0906 15:59:35.121742   38636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 15:59:35.129021   38636 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.129082   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 15:59:35.193199   38636 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220906155821-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 15:59:35.193376   38636 kubeconfig.go:127] "embed-certs-20220906155821-22187" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig - will repair!
	I0906 15:59:35.193711   38636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 15:59:35.195111   38636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 15:59:35.203811   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.203867   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.212091   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.413063   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.413147   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.423469   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.613039   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.613124   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.622019   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:35.812186   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:35.812267   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:35.821025   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.013432   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.013565   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.023339   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.212268   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.212352   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.220885   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:36.412199   38636 api_server.go:165] Checking apiserver status ...
	I0906 15:59:36.412282   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 15:59:36.421519   38636 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... the same "Checking apiserver status" / pgrep probe repeated at ~200ms intervals, exiting with status 1 each time, through I0906 15:59:38.229570 ...]
	I0906 15:59:38.229582   38636 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
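
The loop above is minikube's apiserver liveness probe: every ~200ms it asks pgrep for a kube-apiserver process matching the minikube manifest until one appears or the deadline passes. A minimal Go sketch of the same pattern, assuming a local shell rather than minikube's SSH runner into the node container:

```go
// Sketch only: poll pgrep for the apiserver process, as api_server.go does above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 and prints a PID once a matching process exists.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence in the log
	}
	return "", fmt.Errorf("timed out waiting for kube-apiserver process")
}

func main() {
	pid, err := waitAPIServerPID(30 * time.Second)
	fmt.Println(pid, err)
}
```
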
	I0906 15:59:38.229589   38636 kubeadm.go:1093] stopping kube-system containers ...
	I0906 15:59:38.229646   38636 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 15:59:38.258980   38636 docker.go:443] Stopping containers: [3ace43e3cdd0 fa4259ac8ae1 b10f76b0afab a46bff16a884 ac542e62f7da b9ad41cd6945 a33bf934daea 4f7a134f0b21 dfdc5f92562f d4f62ccab8af 48e63018d570 b925f58f7247 8753c7e8e889 cd1efc2e1d99 94326a96dd97 b67711366c6d]
	I0906 15:59:38.259054   38636 ssh_runner.go:195] Run: docker stop 3ace43e3cdd0 fa4259ac8ae1 b10f76b0afab a46bff16a884 ac542e62f7da b9ad41cd6945 a33bf934daea 4f7a134f0b21 dfdc5f92562f d4f62ccab8af 48e63018d570 b925f58f7247 8753c7e8e889 cd1efc2e1d99 94326a96dd97 b67711366c6d
	I0906 15:59:38.288935   38636 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 15:59:38.298782   38636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 15:59:38.306417   38636 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 22:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 22:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Sep  6 22:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep  6 22:58 /etc/kubernetes/scheduler.conf
	
	I0906 15:59:38.306467   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 15:59:38.313578   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 15:59:38.320753   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 15:59:38.327712   38636 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.327753   38636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 15:59:38.334398   38636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 15:59:38.341325   38636 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:59:38.341375   38636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
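
The grep/rm sequence above keeps each kubeconfig only if it already references the expected control-plane endpoint, so any stale file is removed and regenerated by kubeadm. A rough Go equivalent (would need root; paths copied from the log):

```go
// Sketch only: delete kubeconfigs that do not mention the expected endpoint,
// mirroring the "grep ... || rm -f ..." behaviour in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, path := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale config:", path)
			os.Remove(path) // kubeadm will write a fresh one
		}
	}
}
```
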
	I0906 15:59:38.349241   38636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 15:59:38.356713   38636 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 15:59:38.356727   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:38.408089   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.277607   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.401052   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:39.451457   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
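
Rather than a full `kubeadm init`, the reconfigure path re-runs individual init phases in order, as the five commands above show. A sketch of that sequence, assuming kubeadm is on PATH and the config path from the log:

```go
// Sketch only: run selected kubeadm init phases in order, stopping on failure.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control-plane manifests regenerated")
}
```
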
	I0906 15:59:39.539398   38636 api_server.go:51] waiting for apiserver process to appear ...
	I0906 15:59:39.539455   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.047870   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.548175   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:59:40.608683   38636 api_server.go:71] duration metric: took 1.069984323s to wait for apiserver process to appear ...
	I0906 15:59:40.608708   38636 api_server.go:87] waiting for apiserver healthz status ...
	I0906 15:59:40.608729   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:40.609867   38636 api_server.go:256] stopped: https://127.0.0.1:60239/healthz: Get "https://127.0.0.1:60239/healthz": EOF
	I0906 15:59:41.110592   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:43.701073   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0906 15:59:43.701130   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0906 15:59:44.108296   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:44.115415   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 15:59:44.115431   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 500:
	[response body identical to the 25-line healthz output above]
	I0906 15:59:44.608093   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:44.613832   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 500:
	[same 25-line healthz body as at 15:59:44.115: only poststarthook/rbac/bootstrap-roles and poststarthook/scheduling/bootstrap-system-priority-classes failing]
	W0906 15:59:44.613847   38636 api_server.go:102] status: https://127.0.0.1:60239/healthz returned error 500:
	[response body identical to the output above]
	I0906 15:59:45.107569   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 15:59:45.113794   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 200:
	ok
	I0906 15:59:45.120558   38636 api_server.go:140] control plane version: v1.25.0
	I0906 15:59:45.120569   38636 api_server.go:130] duration metric: took 4.51431829s to wait for apiserver health ...
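
The wait above treats a 403 (RBAC bootstrap roles not created yet) or a 500 (failing poststarthooks) as "not ready" and keeps polling until /healthz returns a plain 200 "ok". A minimal sketch, assuming a forwarded localhost port and that skipping TLS verification is acceptable for an anonymous health probe:

```go
// Sketch only: poll the apiserver /healthz endpoint until it reports healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is a plain "ok" once every check passes
			}
			// 403 and 500 both mean "keep waiting", as in the log above.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://127.0.0.1:60239/healthz", 4*time.Minute))
}
```
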
	I0906 15:59:45.120576   38636 cni.go:95] Creating CNI manager for ""
	I0906 15:59:45.120585   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 15:59:45.120601   38636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 15:59:45.128405   38636 system_pods.go:59] 8 kube-system pods found
	I0906 15:59:45.128423   38636 system_pods.go:61] "coredns-565d847f94-5frt9" [0228f046-b179-4812-a7e5-c83cecc89e27] Running
	I0906 15:59:45.128429   38636 system_pods.go:61] "etcd-embed-certs-20220906155821-22187" [c2de4fd6-a0ae-4f47-85de-74bcc70bdb2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 15:59:45.128433   38636 system_pods.go:61] "kube-apiserver-embed-certs-20220906155821-22187" [0d53a9a2-f2dc-45fa-bce1-519c55da2cc4] Running
	I0906 15:59:45.128438   38636 system_pods.go:61] "kube-controller-manager-embed-certs-20220906155821-22187" [7cbb7baa-b9f1-4603-a7b9-8048df17b8dd] Running
	I0906 15:59:45.128443   38636 system_pods.go:61] "kube-proxy-zss4k" [f1dfb3a5-6fa4-48cf-95fa-0132b1ec5c8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 15:59:45.128448   38636 system_pods.go:61] "kube-scheduler-embed-certs-20220906155821-22187" [f8ba94d8-2b42-4733-b705-bc6af0b91d1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 15:59:45.128453   38636 system_pods.go:61] "metrics-server-5c8fd5cf8-cdg6d" [65746fe5-91aa-47c8-a8b4-d4a67f749ab8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 15:59:45.128456   38636 system_pods.go:61] "storage-provisioner" [13ae32f7-198b-4787-8687-aa39b2729274] Running
	I0906 15:59:45.128460   38636 system_pods.go:74] duration metric: took 7.85832ms to wait for pod list to return data ...
	I0906 15:59:45.128467   38636 node_conditions.go:102] verifying NodePressure condition ...
	I0906 15:59:45.131418   38636 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 15:59:45.131433   38636 node_conditions.go:123] node cpu capacity is 6
	I0906 15:59:45.131442   38636 node_conditions.go:105] duration metric: took 2.974231ms to run NodePressure ...
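
The pod inventory above can be reproduced with client-go; a sketch assuming a kubeconfig at the default location and client-go among the module's dependencies:

```go
// Sketch only: list kube-system pods and their phases, as system_pods.go does.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Phase alone is coarser than the Running/Ready split in the log,
		// but enough to spot pods that never came up.
		fmt.Printf("%-60s %s\n", p.Name, p.Status.Phase)
	}
}
```
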
	I0906 15:59:45.131454   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 15:59:45.310869   38636 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0906 15:59:45.315021   38636 kubeadm.go:778] kubelet initialised
	I0906 15:59:45.315032   38636 kubeadm.go:779] duration metric: took 4.153612ms waiting for restarted kubelet to initialise ...
	I0906 15:59:45.315041   38636 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 15:59:45.320463   38636 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-5frt9" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:45.326126   38636 pod_ready.go:92] pod "coredns-565d847f94-5frt9" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:45.326135   38636 pod_ready.go:81] duration metric: took 5.66283ms waiting for pod "coredns-565d847f94-5frt9" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:45.326141   38636 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:47.335090   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:49.334484   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:51.337017   38636 pod_ready.go:102] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:52.335838   38636 pod_ready.go:92] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:52.335849   38636 pod_ready.go:81] duration metric: took 7.012332045s waiting for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.335855   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.339996   38636 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:52.340004   38636 pod_ready.go:81] duration metric: took 4.146291ms waiting for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:52.340010   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:54.351029   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:56.848497   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:58.850674   38636 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"False"
	I0906 15:59:59.347750   38636 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.347764   38636 pod_ready.go:81] duration metric: took 7.009427345s waiting for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.347771   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zss4k" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.351913   38636 pod_ready.go:92] pod "kube-proxy-zss4k" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.351921   38636 pod_ready.go:81] duration metric: took 4.135355ms waiting for pod "kube-proxy-zss4k" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.351927   38636 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 15:59:59.356071   38636 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 15:59:59.356080   38636 pod_ready.go:81] duration metric: took 4.1483ms waiting for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
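
Each pod_ready wait above boils down to checking the pod's PodReady condition. A client-go sketch of that check; the pod name is one from the log and the kubeconfig path is the default, both illustrative:

```go
// Sketch only: poll a pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := podReady(cs, "kube-system", "coredns-565d847f94-5frt9")
		fmt.Println("ready:", ready, "err:", err)
		if ready {
			return
		}
		time.Sleep(2 * time.Second) // roughly the cadence seen in the log
	}
}
```
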
	I0906 15:59:59.356087   38636 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" ...
	I0906 16:00:01.365786   38636 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace has status "Ready":"False"
	[... the same pod_ready probe for "metrics-server-5c8fd5cf8-cdg6d" repeated at ~2.5s intervals, "Ready":"False" every time, through I0906 16:03:58.362706 ...]
	I0906 16:03:59.356938   38636 pod_ready.go:81] duration metric: took 4m0.004474184s waiting for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" ...
	E0906 16:03:59.356974   38636 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c8fd5cf8-cdg6d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 16:03:59.356999   38636 pod_ready.go:38] duration metric: took 4m14.04989418s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:03:59.357025   38636 kubeadm.go:631] restartCluster took 4m24.248696346s
	W0906 16:03:59.357127   38636 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0906 16:03:59.357149   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 16:04:03.698932   38636 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.341781129s)
	I0906 16:04:03.698999   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:03.708822   38636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 16:04:03.716300   38636 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0906 16:04:03.716346   38636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 16:04:03.724386   38636 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 16:04:03.724421   38636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 16:04:03.767530   38636 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
	I0906 16:04:03.767567   38636 kubeadm.go:317] [preflight] Running pre-flight checks
	I0906 16:04:03.863194   38636 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 16:04:03.863313   38636 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 16:04:03.863392   38636 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 16:04:03.985091   38636 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 16:04:04.009873   38636 out.go:204]   - Generating certificates and keys ...
	I0906 16:04:04.009938   38636 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0906 16:04:04.010013   38636 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0906 16:04:04.010092   38636 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 16:04:04.010151   38636 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0906 16:04:04.010224   38636 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 16:04:04.010326   38636 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0906 16:04:04.010382   38636 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0906 16:04:04.010428   38636 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0906 16:04:04.010506   38636 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 16:04:04.010568   38636 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 16:04:04.010599   38636 kubeadm.go:317] [certs] Using the existing "sa" key
	I0906 16:04:04.010644   38636 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 16:04:04.112141   38636 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 16:04:04.428252   38636 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 16:04:04.781321   38636 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 16:04:04.891466   38636 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 16:04:04.902953   38636 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 16:04:04.903733   38636 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 16:04:04.903840   38636 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0906 16:04:04.989147   38636 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 16:04:05.010782   38636 out.go:204]   - Booting up control plane ...
	I0906 16:04:05.010866   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 16:04:05.010943   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 16:04:05.011017   38636 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 16:04:05.011077   38636 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 16:04:05.011220   38636 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 16:04:10.494832   38636 kubeadm.go:317] [apiclient] All control plane components are healthy after 5.503264 seconds
	I0906 16:04:10.494909   38636 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 16:04:10.501767   38636 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 16:04:11.013788   38636 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 16:04:11.013935   38636 kubeadm.go:317] [mark-control-plane] Marking the node embed-certs-20220906155821-22187 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 16:04:11.519763   38636 kubeadm.go:317] [bootstrap-token] Using token: fqw8zb.b3unh498onihp969
	I0906 16:04:11.556084   38636 out.go:204]   - Configuring RBAC rules ...
	I0906 16:04:11.556186   38636 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 16:04:11.556258   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 16:04:11.595414   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 16:04:11.597593   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 16:04:11.600071   38636 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 16:04:11.602066   38636 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 16:04:11.608914   38636 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 16:04:11.744220   38636 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0906 16:04:11.927532   38636 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0906 16:04:11.936157   38636 kubeadm.go:317] 
	I0906 16:04:11.936239   38636 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0906 16:04:11.936251   38636 kubeadm.go:317] 
	I0906 16:04:11.936347   38636 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0906 16:04:11.936360   38636 kubeadm.go:317] 
	I0906 16:04:11.936397   38636 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0906 16:04:11.936483   38636 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 16:04:11.936535   38636 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 16:04:11.936545   38636 kubeadm.go:317] 
	I0906 16:04:11.936592   38636 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0906 16:04:11.936601   38636 kubeadm.go:317] 
	I0906 16:04:11.936648   38636 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 16:04:11.936660   38636 kubeadm.go:317] 
	I0906 16:04:11.936721   38636 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0906 16:04:11.936790   38636 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 16:04:11.936860   38636 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 16:04:11.936870   38636 kubeadm.go:317] 
	I0906 16:04:11.936973   38636 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 16:04:11.937041   38636 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0906 16:04:11.937049   38636 kubeadm.go:317] 
	I0906 16:04:11.937130   38636 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token fqw8zb.b3unh498onihp969 \
	I0906 16:04:11.937205   38636 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd \
	I0906 16:04:11.937225   38636 kubeadm.go:317] 	--control-plane 
	I0906 16:04:11.937230   38636 kubeadm.go:317] 
	I0906 16:04:11.937297   38636 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0906 16:04:11.937303   38636 kubeadm.go:317] 
	I0906 16:04:11.937368   38636 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token fqw8zb.b3unh498onihp969 \
	I0906 16:04:11.937490   38636 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:79ed1c35953c988ea900a472d926734e45582fa344b9f6d63efccdb3eeb551cd 
	I0906 16:04:11.940643   38636 kubeadm.go:317] W0906 23:04:03.783659    7834 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0906 16:04:11.940759   38636 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0906 16:04:11.940841   38636 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0906 16:04:11.940910   38636 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
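
The --discovery-token-ca-cert-hash in the join command above is, per kubeadm's documentation, a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it from the conventional kubeadm CA path:

```go
// Sketch only: derive the kubeadm discovery token CA cert hash from ca.crt.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```
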
	I0906 16:04:11.940926   38636 cni.go:95] Creating CNI manager for ""
	I0906 16:04:11.940937   38636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:04:11.940954   38636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 16:04:11.941016   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:11.941027   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=b03dd9a575222c1597a06c17f8fb0088dcad17c4 minikube.k8s.io/name=embed-certs-20220906155821-22187 minikube.k8s.io/updated_at=2022_09_06T16_04_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:12.053740   38636 ops.go:34] apiserver oom_adj: -16
	I0906 16:04:12.053787   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same `kubectl get sa default` probe repeated at ~500ms intervals through I0906 16:04:24.129837 ...]
	I0906 16:04:24.629870   38636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:04:24.693469   38636 kubeadm.go:1046] duration metric: took 12.752546325s to wait for elevateKubeSystemPrivileges.
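
The 12.75s loop above exists because kubeadm creates the default service account asynchronously, so minikube polls for it before granting kube-system privileges. A sketch of the same wait, with binary and kubeconfig paths copied from the log:

```go
// Sketch only: wait for the default service account to exist.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // the default service account now exists
		}
		time.Sleep(500 * time.Millisecond) // the cadence seen above
	}
	return fmt.Errorf("default service account never appeared")
}

func main() {
	err := waitDefaultSA("/var/lib/minikube/binaries/v1.25.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
```
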
	I0906 16:04:24.693487   38636 kubeadm.go:398] StartCluster complete in 4m49.621602402s
	I0906 16:04:24.693510   38636 settings.go:142] acquiring lock: {Name:mkbbe342b926ce28a122aef20480577f54f3e0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:04:24.693618   38636 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 16:04:24.694416   38636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig: {Name:mkc3c2734e2020292c6469c3f5cd78e77548721b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:04:25.209438   38636 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220906155821-22187" rescaled to 1
	I0906 16:04:25.209475   38636 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:04:25.209488   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 16:04:25.209543   38636 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0906 16:04:25.248550   38636 out.go:177] * Verifying Kubernetes components...
	I0906 16:04:25.209701   38636 config.go:180] Loaded profile config "embed-certs-20220906155821-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 16:04:25.248613   38636 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248614   38636 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248617   38636 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.248621   38636 addons.go:65] Setting dashboard=true in profile "embed-certs-20220906155821-22187"
	I0906 16:04:25.274065   38636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
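
The sed pipeline above splices a hosts{} block resolving host.minikube.internal into the CoreDNS Corefile just before the forward plugin. A string-level sketch of that rewrite, using the host IP the log injects:

```go
// Sketch only: insert a hosts{} block before CoreDNS's forward plugin.
package main

import (
	"fmt"
	"strings"
)

func injectHostsBlock(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(hosts) // insert just before the forward plugin
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostsBlock(corefile, "192.168.65.2"))
}
```
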
	I0906 16:04:25.323012   38636 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323027   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:25.323031   38636 addons.go:153] Setting addon dashboard=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323035   38636 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.323041   38636 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220906155821-22187"
	W0906 16:04:25.349810   38636 addons.go:162] addon storage-provisioner should already be in state true
	W0906 16:04:25.349817   38636 addons.go:162] addon metrics-server should already be in state true
	W0906 16:04:25.349808   38636 addons.go:162] addon dashboard should already be in state true
	I0906 16:04:25.349908   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.349908   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.350008   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.350278   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351712   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351778   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.351905   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.372800   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.479636   38636 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0906 16:04:25.537415   38636 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0906 16:04:25.500699   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 16:04:25.537466   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 16:04:25.579923   38636 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:04:25.616492   38636 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0906 16:04:25.580057   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.618390   38636 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220906155821-22187"
	I0906 16:04:25.675937   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 16:04:25.634198   38636 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220906155821-22187" to be "Ready" ...
	I0906 16:04:25.675960   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 16:04:25.654052   38636 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:04:25.676027   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0906 16:04:25.675946   38636 addons.go:162] addon default-storageclass should already be in state true
	I0906 16:04:25.676093   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.676134   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.676180   38636 host.go:66] Checking if "embed-certs-20220906155821-22187" exists ...
	I0906 16:04:25.680582   38636 cli_runner.go:164] Run: docker container inspect embed-certs-20220906155821-22187 --format={{.State.Status}}
	I0906 16:04:25.694583   38636 node_ready.go:49] node "embed-certs-20220906155821-22187" has status "Ready":"True"
	I0906 16:04:25.694606   38636 node_ready.go:38] duration metric: took 18.642476ms waiting for node "embed-certs-20220906155821-22187" to be "Ready" ...
	I0906 16:04:25.694617   38636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:04:25.703428   38636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-7hgsh" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:25.769082   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.770815   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.771641   38636 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 16:04:25.771655   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 16:04:25.771721   38636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220906155821-22187
	I0906 16:04:25.771828   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.846515   38636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60235 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/embed-certs-20220906155821-22187/id_rsa Username:docker}
	I0906 16:04:25.908743   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 16:04:25.908759   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 16:04:25.923614   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:04:26.012628   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 16:04:26.012643   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 16:04:26.093532   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 16:04:26.093544   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0906 16:04:26.107106   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 16:04:26.111721   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 16:04:26.111737   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 16:04:26.197860   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 16:04:26.197879   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 16:04:26.222994   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 16:04:26.223005   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0906 16:04:26.290198   38636 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 16:04:26.290219   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 16:04:26.306943   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 16:04:26.306956   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 16:04:26.389305   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 16:04:26.404625   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 16:04:26.404642   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 16:04:26.502869   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 16:04:26.502883   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 16:04:26.586788   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 16:04:26.586801   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 16:04:26.602971   38636 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 16:04:26.602986   38636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 16:04:26.687833   38636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 16:04:26.989360   38636 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.639629341s)
	I0906 16:04:26.989402   38636 start.go:810] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0906 16:04:27.019123   38636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095487172s)
	I0906 16:04:27.105458   38636 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220906155821-22187"
	I0906 16:04:27.721184   38636 pod_ready.go:92] pod "coredns-565d847f94-7hgsh" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:27.721200   38636 pod_ready.go:81] duration metric: took 2.017760025s waiting for pod "coredns-565d847f94-7hgsh" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:27.721212   38636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-hwccr" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:27.884983   38636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.197113945s)
	I0906 16:04:27.919906   38636 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0906 16:04:27.956698   38636 addons.go:414] enableAddons completed in 2.747190456s
	I0906 16:04:29.734002   38636 pod_ready.go:102] pod "coredns-565d847f94-hwccr" in "kube-system" namespace has status "Ready":"False"
	I0906 16:04:30.232781   38636 pod_ready.go:92] pod "coredns-565d847f94-hwccr" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.232795   38636 pod_ready.go:81] duration metric: took 2.511583495s waiting for pod "coredns-565d847f94-hwccr" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.232802   38636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.241018   38636 pod_ready.go:92] pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.241028   38636 pod_ready.go:81] duration metric: took 8.220934ms waiting for pod "etcd-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.241036   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.246347   38636 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.246358   38636 pod_ready.go:81] duration metric: took 5.317921ms waiting for pod "kube-apiserver-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.246365   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.251178   38636 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.271910   38636 pod_ready.go:81] duration metric: took 25.535498ms waiting for pod "kube-controller-manager-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.271928   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k97f9" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.278165   38636 pod_ready.go:92] pod "kube-proxy-k97f9" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.278179   38636 pod_ready.go:81] duration metric: took 6.242796ms waiting for pod "kube-proxy-k97f9" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.278197   38636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.630702   38636 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace has status "Ready":"True"
	I0906 16:04:30.630713   38636 pod_ready.go:81] duration metric: took 352.505269ms waiting for pod "kube-scheduler-embed-certs-20220906155821-22187" in "kube-system" namespace to be "Ready" ...
	I0906 16:04:30.630719   38636 pod_ready.go:38] duration metric: took 4.93610349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:04:30.630735   38636 api_server.go:51] waiting for apiserver process to appear ...
	I0906 16:04:30.630784   38636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 16:04:30.645666   38636 api_server.go:71] duration metric: took 5.436188155s to wait for apiserver process to appear ...
	I0906 16:04:30.645679   38636 api_server.go:87] waiting for apiserver healthz status ...
	I0906 16:04:30.645686   38636 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60239/healthz ...
	I0906 16:04:30.651159   38636 api_server.go:266] https://127.0.0.1:60239/healthz returned 200:
	ok
	I0906 16:04:30.652511   38636 api_server.go:140] control plane version: v1.25.0
	I0906 16:04:30.652524   38636 api_server.go:130] duration metric: took 6.840548ms to wait for apiserver health ...
	I0906 16:04:30.652530   38636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 16:04:30.833833   38636 system_pods.go:59] 9 kube-system pods found
	I0906 16:04:30.833849   38636 system_pods.go:61] "coredns-565d847f94-7hgsh" [94873873-9734-4e1f-8114-f59e04819eec] Running
	I0906 16:04:30.833853   38636 system_pods.go:61] "coredns-565d847f94-hwccr" [14797c46-59df-423f-9376-8faa955f2426] Running
	I0906 16:04:30.833859   38636 system_pods.go:61] "etcd-embed-certs-20220906155821-22187" [eaf284d5-7ece-438d-bf12-b222518876cf] Running
	I0906 16:04:30.833862   38636 system_pods.go:61] "kube-apiserver-embed-certs-20220906155821-22187" [bf038e93-a5ca-48e4-af4c-8d906a875d3a] Running
	I0906 16:04:30.833867   38636 system_pods.go:61] "kube-controller-manager-embed-certs-20220906155821-22187" [a46c5bff-a2cf-4305-8fdd-37c601cb2e63] Running
	I0906 16:04:30.833872   38636 system_pods.go:61] "kube-proxy-k97f9" [36966060-5270-424c-a005-81413d70656a] Running
	I0906 16:04:30.833878   38636 system_pods.go:61] "kube-scheduler-embed-certs-20220906155821-22187" [164df980-70d4-464b-a513-b5174ff3b963] Running
	I0906 16:04:30.833885   38636 system_pods.go:61] "metrics-server-5c8fd5cf8-xq9zv" [73f275fe-7d42-400b-ad93-df387c9ed53d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 16:04:30.833893   38636 system_pods.go:61] "storage-provisioner" [1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd] Running
	I0906 16:04:30.833900   38636 system_pods.go:74] duration metric: took 181.366286ms to wait for pod list to return data ...
	I0906 16:04:30.833906   38636 default_sa.go:34] waiting for default service account to be created ...
	I0906 16:04:31.030564   38636 default_sa.go:45] found service account: "default"
	I0906 16:04:31.030579   38636 default_sa.go:55] duration metric: took 196.655364ms for default service account to be created ...
	I0906 16:04:31.030585   38636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 16:04:31.234390   38636 system_pods.go:86] 9 kube-system pods found
	I0906 16:04:31.234405   38636 system_pods.go:89] "coredns-565d847f94-7hgsh" [94873873-9734-4e1f-8114-f59e04819eec] Running
	I0906 16:04:31.234410   38636 system_pods.go:89] "coredns-565d847f94-hwccr" [14797c46-59df-423f-9376-8faa955f2426] Running
	I0906 16:04:31.234413   38636 system_pods.go:89] "etcd-embed-certs-20220906155821-22187" [eaf284d5-7ece-438d-bf12-b222518876cf] Running
	I0906 16:04:31.234417   38636 system_pods.go:89] "kube-apiserver-embed-certs-20220906155821-22187" [bf038e93-a5ca-48e4-af4c-8d906a875d3a] Running
	I0906 16:04:31.234427   38636 system_pods.go:89] "kube-controller-manager-embed-certs-20220906155821-22187" [a46c5bff-a2cf-4305-8fdd-37c601cb2e63] Running
	I0906 16:04:31.234434   38636 system_pods.go:89] "kube-proxy-k97f9" [36966060-5270-424c-a005-81413d70656a] Running
	I0906 16:04:31.234438   38636 system_pods.go:89] "kube-scheduler-embed-certs-20220906155821-22187" [164df980-70d4-464b-a513-b5174ff3b963] Running
	I0906 16:04:31.234445   38636 system_pods.go:89] "metrics-server-5c8fd5cf8-xq9zv" [73f275fe-7d42-400b-ad93-df387c9ed53d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 16:04:31.234449   38636 system_pods.go:89] "storage-provisioner" [1b1e6634-ac59-4ec2-82cd-aff20a4cc8cd] Running
	I0906 16:04:31.234455   38636 system_pods.go:126] duration metric: took 203.86794ms to wait for k8s-apps to be running ...
	I0906 16:04:31.234461   38636 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 16:04:31.234511   38636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:04:31.244461   38636 system_svc.go:56] duration metric: took 9.993449ms WaitForService to wait for kubelet.
	I0906 16:04:31.244474   38636 kubeadm.go:573] duration metric: took 6.035000594s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 16:04:31.244487   38636 node_conditions.go:102] verifying NodePressure condition ...
	I0906 16:04:31.430989   38636 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0906 16:04:31.431001   38636 node_conditions.go:123] node cpu capacity is 6
	I0906 16:04:31.431008   38636 node_conditions.go:105] duration metric: took 186.51865ms to run NodePressure ...
	I0906 16:04:31.431017   38636 start.go:216] waiting for startup goroutines ...
	I0906 16:04:31.467536   38636 start.go:506] kubectl: 1.25.0, cluster: 1.25.0 (minor skew: 0)
	I0906 16:04:31.509529   38636 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220906155821-22187" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-09-06 22:47:29 UTC, end at Tue 2022-09-06 23:14:25 UTC. --
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[131]: time="2022-09-06T22:47:31.528204599Z" level=info msg="Processing signal 'terminated'"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[131]: time="2022-09-06T22:47:31.529151410Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[131]: time="2022-09-06T22:47:31.529777222Z" level=info msg="Daemon shutdown complete"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: docker.service: Succeeded.
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.588828648Z" level=info msg="Starting up"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590571788Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590605888Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590631004Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.590641853Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591550398Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591603148Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591645967Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.591685874Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.595222522Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.599079518Z" level=info msg="Loading containers: start."
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.676228835Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.708132289Z" level=info msg="Loading containers: done."
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.716192633Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.716331649Z" level=info msg="Daemon has completed initialization"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 systemd[1]: Started Docker Application Container Engine.
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.738785771Z" level=info msg="API listen on [::]:2376"
	Sep 06 22:47:31 old-k8s-version-20220906154143-22187 dockerd[428]: time="2022-09-06T22:47:31.741578122Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-09-06T23:14:28Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:14:28 up  1:30,  0 users,  load average: 0.52, 0.44, 0.68
	Linux old-k8s-version-20220906154143-22187 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-09-06 22:47:29 UTC, end at Tue 2022-09-06 23:14:28 UTC. --
	Sep 06 23:14:26 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 kubelet[34078]: I0906 23:14:27.540694   34078 server.go:410] Version: v1.16.0
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 kubelet[34078]: I0906 23:14:27.541096   34078 plugins.go:100] No cloud provider specified.
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 kubelet[34078]: I0906 23:14:27.541138   34078 server.go:773] Client rotation is on, will bootstrap in background
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 kubelet[34078]: I0906 23:14:27.543019   34078 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 kubelet[34078]: W0906 23:14:27.545256   34078 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 kubelet[34078]: W0906 23:14:27.545342   34078 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 kubelet[34078]: F0906 23:14:27.545365   34078 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 06 23:14:27 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 kubelet[34111]: I0906 23:14:28.278489   34111 server.go:410] Version: v1.16.0
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 kubelet[34111]: I0906 23:14:28.278705   34111 plugins.go:100] No cloud provider specified.
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 kubelet[34111]: I0906 23:14:28.278715   34111 server.go:773] Client rotation is on, will bootstrap in background
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 kubelet[34111]: I0906 23:14:28.280420   34111 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 kubelet[34111]: W0906 23:14:28.281102   34111 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 kubelet[34111]: W0906 23:14:28.281164   34111 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 kubelet[34111]: F0906 23:14:28.281186   34111 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 06 23:14:28 old-k8s-version-20220906154143-22187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 16:14:28.186385   39963 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
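The "connection to the server localhost:8443 was refused" in the stderr above means the kube-apiserver never came up inside the node container, so the log collector could not run "kubectl describe nodes". A minimal spot-check from the host, assuming the node container name shown in this log and that curl is available in the kicbase image:

	# Hypothetical check: probe the apiserver healthz endpoint from inside the node container.
	docker exec old-k8s-version-20220906154143-22187 \
	  curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"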
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 2 (417.42785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220906154143-22187" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.84s)
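The kubelet log above shows v1.16.0 crash-looping on "failed to run Kubelet: mountpoint for cpu not found": that kubelet generation expects a cgroup v1 cpu controller mount, which this linuxkit 5.10 host does not appear to expose. A minimal sketch for inspecting what the node container actually mounts (container name taken from the log above; the grep pattern is illustrative):

	# cgroup v1 shows per-controller mounts (e.g. .../cgroup/cpu,cpuacct);
	# cgroup v2 shows a single cgroup2 line, which kubelet v1.16 cannot use.
	docker exec old-k8s-version-20220906154143-22187 sh -c 'grep cgroup /proc/mounts'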

                                                
                                    

Test pass (245/287)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 19.33
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.25.0/json-events 6.79
11 TestDownloadOnly/v1.25.0/preload-exists 0
14 TestDownloadOnly/v1.25.0/kubectl 0
15 TestDownloadOnly/v1.25.0/LogsDuration 0.28
16 TestDownloadOnly/DeleteAll 0.73
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.42
18 TestDownloadOnlyKic 12.59
19 TestBinaryMirror 1.66
20 TestOffline 48.17
22 TestAddons/Setup 189.62
26 TestAddons/parallel/MetricsServer 5.6
27 TestAddons/parallel/HelmTiller 11.16
29 TestAddons/parallel/CSI 43.88
30 TestAddons/parallel/Headlamp 11.3
32 TestAddons/serial/GCPAuth 15.51
33 TestAddons/StoppedEnableDisable 12.97
34 TestCertOptions 30.18
35 TestCertExpiration 235.84
36 TestDockerFlags 29.49
37 TestForceSystemdFlag 30.5
38 TestForceSystemdEnv 34.09
40 TestHyperKitDriverInstallOrUpdate 7.25
43 TestErrorSpam/setup 30.92
44 TestErrorSpam/start 2.17
45 TestErrorSpam/status 1.28
46 TestErrorSpam/pause 1.83
47 TestErrorSpam/unpause 1.95
48 TestErrorSpam/stop 13.06
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 43.45
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 38.89
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.07
59 TestFunctional/serial/CacheCmd/cache/add_remote 5.22
60 TestFunctional/serial/CacheCmd/cache/add_local 1.83
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
62 TestFunctional/serial/CacheCmd/cache/list 0.07
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.33
65 TestFunctional/serial/CacheCmd/cache/delete 0.15
66 TestFunctional/serial/MinikubeKubectlCmd 0.49
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
68 TestFunctional/serial/ExtraConfig 53.15
69 TestFunctional/serial/ComponentHealth 0.06
70 TestFunctional/serial/LogsCmd 3.09
71 TestFunctional/serial/LogsFileCmd 3.13
73 TestFunctional/parallel/ConfigCmd 0.47
74 TestFunctional/parallel/DashboardCmd 8.63
75 TestFunctional/parallel/DryRun 1.63
76 TestFunctional/parallel/InternationalLanguage 0.66
77 TestFunctional/parallel/StatusCmd 1.28
80 TestFunctional/parallel/ServiceCmd 12.84
82 TestFunctional/parallel/AddonsCmd 0.27
83 TestFunctional/parallel/PersistentVolumeClaim 25.32
85 TestFunctional/parallel/SSHCmd 0.89
86 TestFunctional/parallel/CpCmd 1.62
87 TestFunctional/parallel/MySQL 23.87
88 TestFunctional/parallel/FileSync 0.56
89 TestFunctional/parallel/CertSync 2.82
93 TestFunctional/parallel/NodeLabels 0.06
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
98 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
100 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.19
101 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
102 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
106 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
107 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
108 TestFunctional/parallel/ProfileCmd/profile_list 0.51
109 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
110 TestFunctional/parallel/MountCmd/any-port 9.48
111 TestFunctional/parallel/MountCmd/specific-port 2.57
112 TestFunctional/parallel/DockerEnv/bash 1.69
113 TestFunctional/parallel/Version/short 0.14
114 TestFunctional/parallel/Version/components 0.63
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
119 TestFunctional/parallel/ImageCommands/ImageBuild 2.79
120 TestFunctional/parallel/ImageCommands/Setup 1.78
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.11
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.41
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.49
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.14
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.83
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.78
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.16
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.56
131 TestFunctional/delete_addon-resizer_images 0.16
132 TestFunctional/delete_my-image_image 0.06
133 TestFunctional/delete_minikube_cached_images 0.06
143 TestJSONOutput/start/Command 45.4
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 0.64
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 0.65
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 12.25
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.76
168 TestKicCustomNetwork/create_custom_network 34.33
169 TestKicCustomNetwork/use_default_bridge_network 32.42
170 TestKicExistingNetwork 32.73
171 TestKicCustomSubnet 34.17
172 TestMainNoArgs 0.07
173 TestMinikubeProfile 67.52
176 TestMountStart/serial/StartWithMountFirst 7.76
177 TestMountStart/serial/VerifyMountFirst 0.42
178 TestMountStart/serial/StartWithMountSecond 7.71
179 TestMountStart/serial/VerifyMountSecond 0.41
180 TestMountStart/serial/DeleteFirst 2.21
181 TestMountStart/serial/VerifyMountPostDelete 0.41
182 TestMountStart/serial/Stop 1.62
183 TestMountStart/serial/RestartStopped 5.63
184 TestMountStart/serial/VerifyMountPostStop 0.46
187 TestMultiNode/serial/FreshStart2Nodes 109.68
188 TestMultiNode/serial/DeployApp2Nodes 4.24
189 TestMultiNode/serial/PingHostFrom2Pods 0.86
190 TestMultiNode/serial/AddNode 25.01
191 TestMultiNode/serial/ProfileList 0.48
192 TestMultiNode/serial/CopyFile 15.07
193 TestMultiNode/serial/StopNode 13.92
194 TestMultiNode/serial/StartAfterStop 19.2
196 TestMultiNode/serial/DeleteNode 7.93
197 TestMultiNode/serial/StopMultiNode 25
199 TestMultiNode/serial/ValidateNameConflict 35.09
205 TestScheduledStopUnix 101.48
206 TestSkaffold 59.65
208 TestInsufficientStorage 12.31
224 TestStoppedBinaryUpgrade/Setup 0.78
226 TestStoppedBinaryUpgrade/MinikubeLogs 3.5
235 TestPause/serial/Start 43.91
238 TestNoKubernetes/serial/StartNoK8sWithVersion 0.41
239 TestNoKubernetes/serial/StartWithK8s 30.2
240 TestNoKubernetes/serial/StartWithStopK8s 17.14
241 TestNoKubernetes/serial/Start 6.78
242 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
243 TestNoKubernetes/serial/ProfileList 4.59
244 TestNoKubernetes/serial/Stop 1.68
245 TestNoKubernetes/serial/StartNoArgs 5.25
246 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.82
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.11
249 TestNetworkPlugins/group/calico/Start 309.51
250 TestNetworkPlugins/group/auto/Start 44.79
251 TestNetworkPlugins/group/auto/KubeletFlags 0.43
252 TestNetworkPlugins/group/auto/NetCatPod 10.19
253 TestNetworkPlugins/group/auto/DNS 0.12
254 TestNetworkPlugins/group/auto/Localhost 0.11
255 TestNetworkPlugins/group/auto/HairPin 5.12
256 TestNetworkPlugins/group/false/Start 44.79
257 TestNetworkPlugins/group/false/KubeletFlags 0.42
258 TestNetworkPlugins/group/false/NetCatPod 10.18
259 TestNetworkPlugins/group/false/DNS 0.12
260 TestNetworkPlugins/group/false/Localhost 0.11
261 TestNetworkPlugins/group/false/HairPin 5.12
262 TestNetworkPlugins/group/kindnet/Start 49.7
263 TestNetworkPlugins/group/calico/ControllerPod 5.02
264 TestNetworkPlugins/group/calico/KubeletFlags 0.43
265 TestNetworkPlugins/group/calico/NetCatPod 11.2
266 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
267 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
268 TestNetworkPlugins/group/calico/DNS 0.12
269 TestNetworkPlugins/group/calico/Localhost 0.12
270 TestNetworkPlugins/group/kindnet/NetCatPod 12.21
271 TestNetworkPlugins/group/calico/HairPin 0.13
272 TestNetworkPlugins/group/enable-default-cni/Start 46.75
273 TestNetworkPlugins/group/kindnet/DNS 0.12
274 TestNetworkPlugins/group/kindnet/Localhost 0.12
275 TestNetworkPlugins/group/kindnet/HairPin 0.11
276 TestNetworkPlugins/group/bridge/Start 45.5
277 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
278 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.21
279 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
280 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
281 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
282 TestNetworkPlugins/group/bridge/KubeletFlags 0.53
283 TestNetworkPlugins/group/bridge/NetCatPod 11.21
284 TestNetworkPlugins/group/kubenet/Start 45.1
285 TestNetworkPlugins/group/bridge/DNS 0.11
286 TestNetworkPlugins/group/bridge/Localhost 0.11
287 TestNetworkPlugins/group/bridge/HairPin 0.12
288 TestNetworkPlugins/group/cilium/Start 73.89
289 TestNetworkPlugins/group/kubenet/KubeletFlags 0.42
290 TestNetworkPlugins/group/kubenet/NetCatPod 11.19
291 TestNetworkPlugins/group/kubenet/DNS 0.14
292 TestNetworkPlugins/group/kubenet/Localhost 0.12
294 TestNetworkPlugins/group/cilium/ControllerPod 5.02
295 TestNetworkPlugins/group/cilium/KubeletFlags 0.42
296 TestNetworkPlugins/group/cilium/NetCatPod 10.62
297 TestNetworkPlugins/group/cilium/DNS 0.12
298 TestNetworkPlugins/group/cilium/Localhost 0.13
299 TestNetworkPlugins/group/cilium/HairPin 0.11
303 TestStartStop/group/no-preload/serial/FirstStart 51.16
304 TestStartStop/group/no-preload/serial/DeployApp 13.27
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
306 TestStartStop/group/no-preload/serial/Stop 12.43
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.43
308 TestStartStop/group/no-preload/serial/SecondStart 304.26
311 TestStartStop/group/old-k8s-version/serial/Stop 1.62
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.39
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.02
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
319 TestStartStop/group/default-k8s-different-port/serial/FirstStart 44.92
320 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.27
321 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.79
322 TestStartStop/group/default-k8s-different-port/serial/Stop 12.51
323 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.37
324 TestStartStop/group/default-k8s-different-port/serial/SecondStart 297.28
325 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 6.02
326 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.09
327 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.47
331 TestStartStop/group/newest-cni/serial/FirstStart 40.73
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.79
334 TestStartStop/group/newest-cni/serial/Stop 12.47
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
336 TestStartStop/group/newest-cni/serial/SecondStart 17.72
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
342 TestStartStop/group/embed-certs/serial/FirstStart 42.95
343 TestStartStop/group/embed-certs/serial/DeployApp 12.27
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.83
345 TestStartStop/group/embed-certs/serial/Stop 12.53
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.39
347 TestStartStop/group/embed-certs/serial/SecondStart 301.82
348 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
349 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
350 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.45
TestDownloadOnly/v1.16.0/json-events (19.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220906144354-22187 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220906144354-22187 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (19.32616605s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (19.33s)
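This test drives "start -o=json", which streams one CloudEvents-style JSON object per line on stdout. A sketch of consuming that stream, assuming jq is installed; the profile name "demo" is illustrative, and while the event type and data.name field match what minikube emits for step events, treat the exact field names as an assumption:

	# Hypothetical one-liner: print each named step as the start progresses.
	out/minikube-darwin-amd64 start -o=json --download-only -p demo --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'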

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220906144354-22187
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220906144354-22187: exit status 85 (295.171228ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220906144354-22187 | jenkins | v1.26.1 | 06 Sep 22 14:43 PDT |          |
	|         | download-only-20220906144354-22187 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 14:43:54
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 14:43:54.688826   22209 out.go:296] Setting OutFile to fd 1 ...
	I0906 14:43:54.689083   22209 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:43:54.689088   22209 out.go:309] Setting ErrFile to fd 2...
	I0906 14:43:54.689092   22209 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:43:54.689188   22209 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	W0906 14:43:54.689297   22209 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/config/config.json: no such file or directory
	I0906 14:43:54.689972   22209 out.go:303] Setting JSON to true
	I0906 14:43:54.705374   22209 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6205,"bootTime":1662494429,"procs":335,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 14:43:54.705471   22209 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 14:43:54.731972   22209 out.go:97] [download-only-20220906144354-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 14:43:54.732086   22209 notify.go:193] Checking for updates...
	W0906 14:43:54.732104   22209 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 14:43:54.752888   22209 out.go:169] MINIKUBE_LOCATION=14848
	I0906 14:43:54.775024   22209 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 14:43:54.802040   22209 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 14:43:54.822997   22209 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 14:43:54.844037   22209 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	W0906 14:43:54.886887   22209 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 14:43:54.887122   22209 driver.go:365] Setting default libvirt URI to qemu:///system
	W0906 14:43:54.947375   22209 docker.go:113] docker version returned error: exit status 1
	I0906 14:43:54.968202   22209 out.go:97] Using the docker driver based on user configuration
	I0906 14:43:54.968223   22209 start.go:284] selected driver: docker
	I0906 14:43:54.968241   22209 start.go:808] validating driver "docker" against <nil>
	I0906 14:43:54.968336   22209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 14:43:55.100091   22209 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 14:43:55.121700   22209 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0906 14:43:55.142541   22209 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0906 14:43:55.184397   22209 out.go:169] 
	W0906 14:43:55.205557   22209 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0906 14:43:55.226595   22209 out.go:169] 
	I0906 14:43:55.268583   22209 out.go:169] 
	W0906 14:43:55.289777   22209 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0906 14:43:55.289866   22209 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0906 14:43:55.289907   22209 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0906 14:43:55.310478   22209 out.go:169] 
	I0906 14:43:55.331512   22209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 14:43:55.465273   22209 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	W0906 14:43:55.486983   22209 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0906 14:43:55.487033   22209 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0906 14:43:55.530872   22209 out.go:169] 
	W0906 14:43:55.551852   22209 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0906 14:43:55.551958   22209 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0906 14:43:55.551984   22209 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0906 14:43:55.572693   22209 out.go:169] 
	I0906 14:43:55.621710   22209 out.go:169] 
	W0906 14:43:55.642895   22209 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0906 14:43:55.663817   22209 out.go:169] 
	I0906 14:43:55.684640   22209 start_flags.go:377] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0906 14:43:55.684764   22209 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 14:43:55.705782   22209 out.go:169] Using Docker Desktop driver with root privileges
	I0906 14:43:55.726844   22209 cni.go:95] Creating CNI manager for ""
	I0906 14:43:55.726864   22209 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 14:43:55.726876   22209 start_flags.go:310] config:
	{Name:download-only-20220906144354-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220906144354-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 14:43:55.747640   22209 out.go:97] Starting control plane node download-only-20220906144354-22187 in cluster download-only-20220906144354-22187
	I0906 14:43:55.747681   22209 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 14:43:55.768726   22209 out.go:97] Pulling base image ...
	I0906 14:43:55.768756   22209 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 14:43:55.768794   22209 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 14:43:55.768939   22209 cache.go:107] acquiring lock: {Name:mk7078dbe496c905d4928b9b07d4fb130f0f8e99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 14:43:55.768965   22209 cache.go:107] acquiring lock: {Name:mkbe78f1b87581e975f28ba1c969004987f580e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 14:43:55.769805   22209 cache.go:107] acquiring lock: {Name:mk8ec601544bb5c436c532b563c312184e48a4fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 14:43:55.770016   22209 cache.go:107] acquiring lock: {Name:mk5c7fa2370bf5670cd0ca7be4034f8b4e5efab5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 14:43:55.770062   22209 cache.go:107] acquiring lock: {Name:mk5179088f5b6e6e3e6b8542468ef07a0b7d7865 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 14:43:55.770091   22209 cache.go:107] acquiring lock: {Name:mkd9500fe266c8cfd5562cd51d1066885480f01e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 14:43:55.770259   22209 cache.go:107] acquiring lock: {Name:mk8377a69155109d3b425c8770d5f3ccd61db871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 14:43:55.770172   22209 cache.go:107] acquiring lock: {Name:mk3b9d31ca177f2d9424aa0459f8d2ecd2516d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 14:43:55.771252   22209 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 14:43:55.771144   22209 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0906 14:43:55.771309   22209 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0906 14:43:55.771387   22209 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/download-only-20220906144354-22187/config.json ...
	I0906 14:43:55.771419   22209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/download-only-20220906144354-22187/config.json: {Name:mk7f1b97a5a0426476bc3bf739f718d301a215fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 14:43:55.771418   22209 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0906 14:43:55.771458   22209 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0906 14:43:55.771526   22209 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0906 14:43:55.771586   22209 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0906 14:43:55.771615   22209 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0906 14:43:55.771871   22209 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 14:43:55.772336   22209 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0906 14:43:55.772343   22209 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0906 14:43:55.772357   22209 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0906 14:43:55.779464   22209 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0906 14:43:55.779535   22209 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0906 14:43:55.779776   22209 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0906 14:43:55.781926   22209 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0906 14:43:55.782050   22209 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0906 14:43:55.782077   22209 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0906 14:43:55.782084   22209 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0906 14:43:55.782090   22209 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0906 14:43:55.841635   22209 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d to local cache
	I0906 14:43:55.841858   22209 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local cache directory
	I0906 14:43:55.842015   22209 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d to local cache
	I0906 14:43:56.288633   22209 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 14:43:56.995943   22209 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 14:43:56.995963   22209 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.227033697s
	I0906 14:43:56.995976   22209 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 14:43:58.121251   22209 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0906 14:43:58.332040   22209 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0906 14:43:58.347769   22209 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0906 14:43:58.430783   22209 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0906 14:43:58.431698   22209 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0906 14:43:58.444949   22209 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0906 14:43:58.449433   22209 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0906 14:43:58.504784   22209 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0906 14:43:58.504799   22209 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 2.735789244s
	I0906 14:43:58.504808   22209 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0906 14:43:58.781427   22209 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0906 14:44:01.300752   22209 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0906 14:44:01.300772   22209 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 5.530713869s
	I0906 14:44:01.300781   22209 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0906 14:44:02.697623   22209 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0906 14:44:02.697639   22209 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 6.928671229s
	I0906 14:44:02.697656   22209 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0906 14:44:03.442449   22209 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0906 14:44:03.442465   22209 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 7.67257048s
	I0906 14:44:03.442476   22209 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0906 14:44:04.176828   22209 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0906 14:44:04.176849   22209 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 8.407041805s
	I0906 14:44:04.176860   22209 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0906 14:44:04.408629   22209 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0906 14:44:04.408647   22209 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 8.639634557s
	I0906 14:44:04.408655   22209 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0906 14:44:09.643968   22209 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0906 14:44:09.643992   22209 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 13.874067665s
	I0906 14:44:09.644000   22209 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0906 14:44:09.644017   22209 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220906144354-22187"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
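
The kubeadm, kubectl, and kubelet fetches in the log above use checksum-suffixed URLs ("?checksum=file:<url>.sha1"), so the downloader can verify each binary against its published SHA-1 digest. For reference, here is a minimal, self-contained Go sketch of that verify-after-download pattern; it is illustrative only and is not minikube's actual download code:

	// Sketch: fetch a release binary, then verify it against the published
	// .sha1 file, mirroring the "?checksum=file:" URL pattern in the log.
	package main

	import (
		"crypto/sha1"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetch returns the response body of url (no retries; illustration only).
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha1")
		if err != nil {
			panic(err)
		}
		want := strings.Fields(string(sum))[0] // digest is the first field
		got := sha1.Sum(bin)
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch for " + base)
		}
		fmt.Println("checksum ok:", want)
	}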

TestDownloadOnly/v1.25.0/json-events (6.79s)

=== RUN   TestDownloadOnly/v1.25.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220906144354-22187 --force --alsologtostderr --kubernetes-version=v1.25.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220906144354-22187 --force --alsologtostderr --kubernetes-version=v1.25.0 --container-runtime=docker --driver=docker : (6.786357543s)
--- PASS: TestDownloadOnly/v1.25.0/json-events (6.79s)
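
With -o=json, minikube reports its progress as a stream of JSON events on stdout, which is what this subtest consumes. Below is a hedged Go sketch of reading such a stream, assuming only one JSON object per line and a generic "type" field rather than minikube's exact event schema:

	// Sketch: read a line-delimited JSON event stream, e.g.
	//   minikube start -o=json ... | thisprog
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long event lines
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON output interleaved in the stream
			}
			fmt.Println("event type:", ev["type"])
		}
	}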

TestDownloadOnly/v1.25.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.0/preload-exists
--- PASS: TestDownloadOnly/v1.25.0/preload-exists (0.00s)
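
For v1.25.0 a preload tarball is available (its download is visible in the LogsDuration output below), so this check passes immediately; the check itself boils down to a stat of the cached tarball under the minikube home. A minimal illustrative sketch, with the path layout copied from the log rather than from the test's actual code:

	// Sketch: does the preloaded-images tarball exist in the local cache?
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// In this run MINIKUBE_HOME points directly at the .minikube directory.
		home := os.Getenv("MINIKUBE_HOME")
		tarball := filepath.Join(home, "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4")
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("preload missing:", err)
			os.Exit(1)
		}
		fmt.Println("preload exists:", tarball)
	}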

TestDownloadOnly/v1.25.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.0/kubectl
--- PASS: TestDownloadOnly/v1.25.0/kubectl (0.00s)

TestDownloadOnly/v1.25.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.25.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220906144354-22187
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220906144354-22187: exit status 85 (283.389905ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220906144354-22187 | jenkins | v1.26.1 | 06 Sep 22 14:43 PDT |          |
	|         | download-only-20220906144354-22187 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	| start   | -o=json --download-only -p         | download-only-20220906144354-22187 | jenkins | v1.26.1 | 06 Sep 22 14:44 PDT |          |
	|         | download-only-20220906144354-22187 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.25.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/06 14:44:14
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 14:44:14.542221   22739 out.go:296] Setting OutFile to fd 1 ...
	I0906 14:44:14.542483   22739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:44:14.542488   22739 out.go:309] Setting ErrFile to fd 2...
	I0906 14:44:14.542494   22739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:44:14.542600   22739 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	W0906 14:44:14.542689   22739 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/config/config.json: no such file or directory
	I0906 14:44:14.543031   22739 out.go:303] Setting JSON to true
	I0906 14:44:14.558094   22739 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6225,"bootTime":1662494429,"procs":341,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 14:44:14.558232   22739 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 14:44:14.580599   22739 out.go:97] [download-only-20220906144354-22187] minikube v1.26.1 on Darwin 12.5.1
	W0906 14:44:14.580798   22739 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 14:44:14.580813   22739 notify.go:193] Checking for updates...
	I0906 14:44:14.602009   22739 out.go:169] MINIKUBE_LOCATION=14848
	I0906 14:44:14.623441   22739 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 14:44:14.645284   22739 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 14:44:14.666385   22739 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 14:44:14.688617   22739 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	W0906 14:44:14.735971   22739 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 14:44:14.736393   22739 config.go:180] Loaded profile config "download-only-20220906144354-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0906 14:44:14.736445   22739 start.go:716] api.Load failed for download-only-20220906144354-22187: filestore "download-only-20220906144354-22187": Docker machine "download-only-20220906144354-22187" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 14:44:14.736492   22739 driver.go:365] Setting default libvirt URI to qemu:///system
	W0906 14:44:14.736512   22739 start.go:716] api.Load failed for download-only-20220906144354-22187: filestore "download-only-20220906144354-22187": Docker machine "download-only-20220906144354-22187" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 14:44:14.801322   22739 docker.go:137] docker version: linux-20.10.17
	I0906 14:44:14.801451   22739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 14:44:14.929243   22739 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-09-06 21:44:14.861627429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 14:44:14.951339   22739 out.go:97] Using the docker driver based on existing profile
	I0906 14:44:14.951371   22739 start.go:284] selected driver: docker
	I0906 14:44:14.951377   22739 start.go:808] validating driver "docker" against &{Name:download-only-20220906144354-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220906144354-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 14:44:14.951591   22739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 14:44:15.079264   22739 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-09-06 21:44:15.012804239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 14:44:15.081430   22739 cni.go:95] Creating CNI manager for ""
	I0906 14:44:15.081544   22739 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0906 14:44:15.081570   22739 start_flags.go:310] config:
	{Name:download-only-20220906144354-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:download-only-20220906144354-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 14:44:15.103406   22739 out.go:97] Starting control plane node download-only-20220906144354-22187 in cluster download-only-20220906144354-22187
	I0906 14:44:15.103461   22739 cache.go:120] Beginning downloading kic base image for docker with docker
	I0906 14:44:15.125065   22739 out.go:97] Pulling base image ...
	I0906 14:44:15.125193   22739 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local docker daemon
	I0906 14:44:15.125233   22739 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 14:44:15.187271   22739 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d to local cache
	I0906 14:44:15.187447   22739 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local cache directory
	I0906 14:44:15.187463   22739 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d in local cache directory, skipping pull
	I0906 14:44:15.187469   22739 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d exists in cache, skipping pull
	I0906 14:44:15.187477   22739 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d as a tarball
	I0906 14:44:15.191584   22739 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.0/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	I0906 14:44:15.191606   22739 cache.go:57] Caching tarball of preloaded images
	I0906 14:44:15.191818   22739 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
	I0906 14:44:15.213942   22739 out.go:97] Downloading Kubernetes v1.25.0 preload ...
	I0906 14:44:15.213979   22739 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 ...
	I0906 14:44:15.309911   22739 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.0/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4?checksum=md5:e6de79397281dbe550a1d4399b254698 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220906144354-22187"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.0/LogsDuration (0.28s)

TestDownloadOnly/DeleteAll (0.73s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.73s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.42s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220906144354-22187
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.42s)

TestDownloadOnlyKic (12.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220906144423-22187 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220906144423-22187 --force --alsologtostderr --driver=docker : (11.469720541s)
helpers_test.go:175: Cleaning up "download-docker-20220906144423-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220906144423-22187
--- PASS: TestDownloadOnlyKic (12.59s)

TestBinaryMirror (1.66s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220906144435-22187 --alsologtostderr --binary-mirror http://127.0.0.1:55431 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220906144435-22187 --alsologtostderr --binary-mirror http://127.0.0.1:55431 --driver=docker : (1.006748345s)
helpers_test.go:175: Cleaning up "binary-mirror-20220906144435-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220906144435-22187
--- PASS: TestBinaryMirror (1.66s)
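
Here --binary-mirror redirects minikube's binary downloads to a local HTTP server on 127.0.0.1:55431. Any server that exposes the kubernetes-release directory layout will do; the following Go snippet is a hypothetical stand-in for such a mirror, not the harness's actual server:

	// Sketch: a trivial local binary mirror. ./mirror is assumed to hold the
	// kubernetes-release layout, e.g. release/v1.25.0/bin/darwin/amd64/kubectl.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:55431", nil))
	}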

TestOffline (48.17s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220906152522-22187 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220906152522-22187 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (45.441380065s)
helpers_test.go:175: Cleaning up "offline-docker-20220906152522-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220906152522-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220906152522-22187: (2.726910132s)
--- PASS: TestOffline (48.17s)

TestAddons/Setup (189.62s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220906144437-22187 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220906144437-22187 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m9.614885215s)
--- PASS: TestAddons/Setup (189.62s)

TestAddons/parallel/MetricsServer (5.6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 2.048447ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-769cd898cd-n7mkp" [d105bcae-2ff9-4a12-8cea-644878a0c6f3] Running
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008152854s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220906144437-22187 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220906144437-22187 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)

TestAddons/parallel/HelmTiller (11.16s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.112657ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-qww4j" [261122c2-42b7-46a0-9f54-9801eaceebba] Running
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007814155s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220906144437-22187 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-20220906144437-22187 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.700556133s)
addons_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220906144437-22187 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.16s)

                                                
                                    
x
+
TestAddons/parallel/CSI (43.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 8.705757ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220906144437-22187 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220906144437-22187 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220906144437-22187 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [a021c136-8261-43d6-9ff9-fd5ff7c52285] Pending
helpers_test.go:342: "task-pv-pod" [a021c136-8261-43d6-9ff9-fd5ff7c52285] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [a021c136-8261-43d6-9ff9-fd5ff7c52285] Running
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.006313894s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220906144437-22187 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220906144437-22187 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220906144437-22187 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220906144437-22187 delete pod task-pv-pod
addons_test.go:546: (dbg) Done: kubectl --context addons-20220906144437-22187 delete pod task-pv-pod: (1.105969815s)
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220906144437-22187 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220906144437-22187 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220906144437-22187 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220906144437-22187 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [a02c0431-ac17-47fa-a87e-754d1fde1ca5] Pending
helpers_test.go:342: "task-pv-pod-restore" [a02c0431-ac17-47fa-a87e-754d1fde1ca5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [a02c0431-ac17-47fa-a87e-754d1fde1ca5] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.009664432s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220906144437-22187 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220906144437-22187 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220906144437-22187 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220906144437-22187 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220906144437-22187 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.796456757s)
addons_test.go:594: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220906144437-22187 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.88s)
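
The waits above poll kubectl's jsonpath output until each resource reaches the desired state. A rough, hypothetical Go equivalent of that polling loop, shelling out to kubectl with the context, pvc name, and 6m0s budget taken from the log:

	// Sketch: poll a pvc's phase via kubectl until it is Bound or we time out.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// pvcPhase shells out to kubectl much like the test helpers above do.
	func pvcPhase(ctx, name string) (string, error) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", "default").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the test
		for time.Now().Before(deadline) {
			if phase, err := pvcPhase("addons-20220906144437-22187", "hpvc"); err == nil && phase == "Bound" {
				fmt.Println("pvc bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc")
	}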

TestAddons/parallel/Headlamp (11.3s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-20220906144437-22187 --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-20220906144437-22187 --alsologtostderr -v=1: (1.293291904s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-788c8d94dd-xn4g5" [5ca0fbe7-a280-4930-906e-7571a0fd0017] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-788c8d94dd-xn4g5" [5ca0fbe7-a280-4930-906e-7571a0fd0017] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.006263484s
--- PASS: TestAddons/parallel/Headlamp (11.30s)

TestAddons/serial/GCPAuth (15.51s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220906144437-22187 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220906144437-22187 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [34e43e40-aa70-4c81-9ab3-1a16739e3ea6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [34e43e40-aa70-4c81-9ab3-1a16739e3ea6] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.006544526s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220906144437-22187 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220906144437-22187 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-20220906144437-22187 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220906144437-22187 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220906144437-22187 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220906144437-22187 addons disable gcp-auth --alsologtostderr -v=1: (6.608145243s)
--- PASS: TestAddons/serial/GCPAuth (15.51s)

TestAddons/StoppedEnableDisable (12.97s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220906144437-22187
addons_test.go:134: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220906144437-22187: (12.535818666s)
addons_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220906144437-22187
addons_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220906144437-22187
--- PASS: TestAddons/StoppedEnableDisable (12.97s)

TestCertOptions (30.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220906153258-22187 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220906153258-22187 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (26.567392054s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220906153258-22187 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220906153258-22187 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220906153258-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220906153258-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220906153258-22187: (2.700899829s)
--- PASS: TestCertOptions (30.18s)
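A sketch of reproducing the SAN check above by hand; every flag is taken from the run, while the profile name "demo" and the bare minikube binary are illustrative stand-ins:

    # Start a cluster whose apiserver certificate carries extra SANs and a custom port
    minikube start -p demo --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker
    # The SAN block of the generated cert should list the IPs and names above
    minikube -p demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    minikube delete -p demo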

TestCertExpiration (235.84s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220906153156-22187 --memory=2048 --cert-expiration=3m --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220906153156-22187 --memory=2048 --cert-expiration=3m --driver=docker : (29.285657582s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220906153156-22187 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220906153156-22187 --memory=2048 --cert-expiration=8760h --driver=docker : (23.859991157s)
helpers_test.go:175: Cleaning up "cert-expiration-20220906153156-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220906153156-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220906153156-22187: (2.690450422s)
--- PASS: TestCertExpiration (235.84s)

TestDockerFlags (29.49s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220906153228-22187 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0906 15:32:40.817051   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:32:41.269093   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:32:47.083531   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220906153228-22187 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (25.922062822s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220906153228-22187 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220906153228-22187 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220906153228-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220906153228-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220906153228-22187: (2.704974686s)
--- PASS: TestDockerFlags (29.49s)
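The test asserts that --docker-env and --docker-opt values surface in the dockerd systemd unit. A sketch with the same flags, using a hypothetical profile "demo":

    minikube start -p demo --driver=docker \
      --docker-env=FOO=BAR --docker-env=BAZ=BAT \
      --docker-opt=debug --docker-opt=icc=true
    # Environment= should contain FOO=BAR and BAZ=BAT
    minikube -p demo ssh "sudo systemctl show docker --property=Environment --no-pager"
    # ExecStart should carry the --debug and --icc=true options
    minikube -p demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"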

TestForceSystemdFlag (30.5s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220906153125-22187 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220906153125-22187 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (27.100663528s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220906153125-22187 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220906153125-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220906153125-22187

=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220906153125-22187: (2.905124042s)
--- PASS: TestForceSystemdFlag (30.50s)

TestForceSystemdEnv (34.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220906153154-22187 --memory=2048 --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220906153154-22187 --memory=2048 --alsologtostderr -v=5 --driver=docker : (30.830739346s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220906153154-22187 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdEnv
helpers_test.go:175: Cleaning up "force-systemd-env-20220906153154-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220906153154-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220906153154-22187: (2.65253789s)
--- PASS: TestForceSystemdEnv (34.09s)
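Both force-systemd tests end with the same assertion on the runtime's cgroup driver. A sketch of the flag variant, with a hypothetical profile name (the Env variant presumably drives the same behavior through the MINIKUBE_FORCE_SYSTEMD environment variable rather than the flag):

    minikube start -p demo --force-systemd --driver=docker
    # Expect "systemd" here rather than the default "cgroupfs"
    minikube -p demo ssh "docker info --format {{.CgroupDriver}}"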

TestHyperKitDriverInstallOrUpdate (7.25s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.25s)

TestErrorSpam/setup (30.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220906144914-22187 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220906144914-22187 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 --driver=docker : (30.915877971s)
--- PASS: TestErrorSpam/setup (30.92s)

TestErrorSpam/start (2.17s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 start --dry-run
--- PASS: TestErrorSpam/start (2.17s)

TestErrorSpam/status (1.28s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 status
--- PASS: TestErrorSpam/status (1.28s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.95s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 unpause
--- PASS: TestErrorSpam/unpause (1.95s)

TestErrorSpam/stop (13.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 stop: (12.385212136s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220906144914-22187 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220906144914-22187 stop
--- PASS: TestErrorSpam/stop (13.06s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/files/etc/test/nested/copy/22187/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.45s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (43.447903733s)
--- PASS: TestFunctional/serial/StartWithProxy (43.45s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.89s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --alsologtostderr -v=8: (38.886728152s)
functional_test.go:655: soft start took 38.887384006s for "functional-20220906145007-22187" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.89s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220906145007-22187 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache add k8s.gcr.io/pause:3.3: (2.677914162s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache add k8s.gcr.io/pause:latest: (1.550064575s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.22s)

TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220906145007-22187 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local3752515141/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache add minikube-local-cache-test:functional-20220906145007-22187
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache add minikube-local-cache-test:functional-20220906145007-22187: (1.319474567s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache delete minikube-local-cache-test:functional-20220906145007-22187
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220906145007-22187
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (411.653563ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 cache reload: (1.039003117s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.33s)
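The sequence above exercises the cache-recovery path: remove a cached image inside the node, confirm it is gone, then push the on-host cache back in. A sketch, assuming k8s.gcr.io/pause:latest was previously added with `minikube cache add`:

    minikube ssh sudo docker rmi k8s.gcr.io/pause:latest
    # Now fails: no such image present in the node
    minikube ssh sudo crictl inspecti k8s.gcr.io/pause:latest
    # Reload everything from minikube's on-host cache into the node
    minikube cache reload
    # Succeeds again
    minikube ssh sudo crictl inspecti k8s.gcr.io/pause:latest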

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 kubectl -- --context functional-20220906145007-22187 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220906145007-22187 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

TestFunctional/serial/ExtraConfig (53.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.149626109s)
functional_test.go:753: restart took 53.149763575s for "functional-20220906145007-22187" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (53.15s)
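--extra-config forwards per-component flags to the Kubernetes components on (re)start; the format is <component>.<flag>=<value>. A sketch of the restart exercised above:

    # Enable an extra apiserver admission plugin and wait for all components
    minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all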

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220906145007-22187 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.09s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 logs
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 logs: (3.084634736s)
--- PASS: TestFunctional/serial/LogsCmd (3.09s)

TestFunctional/serial/LogsFileCmd (3.13s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3275921030/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3275921030/001/logs.txt: (3.12890855s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.13s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220906145007-22187 config get cpus: exit status 14 (52.626545ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220906145007-22187 config get cpus: exit status 14 (54.573213ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
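The exit codes are the interesting part here: `config get` on an unset key fails with status 14 ("specified key could not be found in config"). A sketch of the same round trip:

    minikube config unset cpus
    minikube config get cpus    # exit status 14: key not found
    minikube config set cpus 2
    minikube config get cpus    # prints 2 and exits 0
    minikube config unset cpus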

TestFunctional/parallel/DashboardCmd (8.63s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220906145007-22187 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220906145007-22187 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 24775: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.63s)

TestFunctional/parallel/DryRun (1.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (754.72328ms)

-- stdout --
	* [functional-20220906145007-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0906 14:53:16.815113   24669 out.go:296] Setting OutFile to fd 1 ...
	I0906 14:53:16.815289   24669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:53:16.815294   24669 out.go:309] Setting ErrFile to fd 2...
	I0906 14:53:16.815298   24669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:53:16.815400   24669 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 14:53:16.815827   24669 out.go:303] Setting JSON to false
	I0906 14:53:16.831619   24669 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6767,"bootTime":1662494429,"procs":331,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 14:53:16.831716   24669 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 14:53:16.853634   24669 out.go:177] * [functional-20220906145007-22187] minikube v1.26.1 on Darwin 12.5.1
	I0906 14:53:16.916244   24669 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 14:53:16.958419   24669 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 14:53:17.000519   24669 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 14:53:17.042387   24669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 14:53:17.084482   24669 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 14:53:17.105751   24669 config.go:180] Loaded profile config "functional-20220906145007-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 14:53:17.106142   24669 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 14:53:17.175067   24669 docker.go:137] docker version: linux-20.10.17
	I0906 14:53:17.175239   24669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 14:53:17.308155   24669 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-09-06 21:53:17.239575921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 14:53:17.351278   24669 out.go:177] * Using the docker driver based on existing profile
	I0906 14:53:17.388846   24669 start.go:284] selected driver: docker
	I0906 14:53:17.388857   24669 start.go:808] validating driver "docker" against &{Name:functional-20220906145007-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:functional-20220906145007-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 14:53:17.388989   24669 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 14:53:17.433236   24669 out.go:177] 
	W0906 14:53:17.453952   24669 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 14:53:17.475011   24669 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.63s)
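--dry-run runs the full validation pipeline without creating anything, which is why the undersized request above fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch against an existing profile, with "demo" as a hypothetical name:

    # Rejected: 250MiB is below the usable minimum of 1800MB, exit status 23
    minikube start -p demo --dry-run --memory 250MB --driver=docker
    # Without the undersized request the dry run should validate cleanly
    minikube start -p demo --dry-run --driver=docker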

TestFunctional/parallel/InternationalLanguage (0.66s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220906145007-22187 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (660.684486ms)

-- stdout --
	* [functional-20220906145007-22187] minikube v1.26.1 sur Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0906 14:53:16.151603   24644 out.go:296] Setting OutFile to fd 1 ...
	I0906 14:53:16.151752   24644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:53:16.151757   24644 out.go:309] Setting ErrFile to fd 2...
	I0906 14:53:16.151761   24644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 14:53:16.151883   24644 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 14:53:16.152305   24644 out.go:303] Setting JSON to false
	I0906 14:53:16.168554   24644 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6767,"bootTime":1662494429,"procs":331,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5.1","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0906 14:53:16.168649   24644 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0906 14:53:16.193584   24644 out.go:177] * [functional-20220906145007-22187] minikube v1.26.1 sur Darwin 12.5.1
	I0906 14:53:16.234465   24644 out.go:177]   - MINIKUBE_LOCATION=14848
	I0906 14:53:16.255585   24644 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	I0906 14:53:16.297374   24644 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 14:53:16.339438   24644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 14:53:16.381373   24644 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	I0906 14:53:16.402944   24644 config.go:180] Loaded profile config "functional-20220906145007-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 14:53:16.403539   24644 driver.go:365] Setting default libvirt URI to qemu:///system
	I0906 14:53:16.473748   24644 docker.go:137] docker version: linux-20.10.17
	I0906 14:53:16.473908   24644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 14:53:16.605981   24644 info.go:265] docker info: {ID:HSF4:724Q:33A4:LV3T:VEUE:HFH5:DXTR:C6CE:AQR6:XBHD:TCTB:UPJH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:52 SystemTime:2022-09-06 21:53:16.537195604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0906 14:53:16.627727   24644 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0906 14:53:16.648480   24644 start.go:284] selected driver: docker
	I0906 14:53:16.648496   24644 start.go:808] validating driver "docker" against &{Name:functional-20220906145007-22187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.33-1661795577-14482@sha256:e92c29880a4b3b095ed3b61b1f4a696b57c5cd5212bc8256f9599a777020645d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:functional-20220906145007-22187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0906 14:53:16.648646   24644 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 14:53:16.673364   24644 out.go:177] 
	W0906 14:53:16.694567   24644 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 14:53:16.720481   24644 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.66s)

TestFunctional/parallel/StatusCmd (1.28s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 status
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)
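`status` supports a Go-template format string over fields such as .Host, .Kubelet, .APIServer and .Kubeconfig, plus JSON output. A sketch of the three invocations above (the "kublet" label in the test's format string is just user-chosen text; the field name is .Kubelet):

    minikube status
    minikube status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube status -o json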

TestFunctional/parallel/ServiceCmd (12.84s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220906145007-22187 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220906145007-22187 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-lwlhr" [28af606e-f107-4ab2-aa7e-6810932a77aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-5fcdfb5cc4-lwlhr" [28af606e-f107-4ab2-aa7e-6810932a77aa] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.00690756s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 service --namespace=default --https --url hello-node: (2.028087068s)
functional_test.go:1475: found endpoint: https://127.0.0.1:56187
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 service hello-node --url --format={{.IP}}: (2.025032403s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 service hello-node --url: (2.027532514s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:56216
--- PASS: TestFunctional/parallel/ServiceCmd (12.84s)
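For reference, the flow this test drives can be replayed by hand with the same names from the log; on the docker driver on macOS, "service --url" keeps a tunnel process open and prints 127.0.0.1 endpoints like the ones above:

kubectl --context functional-20220906145007-22187 create deployment hello-node \
  --image=k8s.gcr.io/echoserver:1.8
kubectl --context functional-20220906145007-22187 expose deployment hello-node \
  --type=NodePort --port=8080
out/minikube-darwin-amd64 -p functional-20220906145007-22187 service hello-node --url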

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (25.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [69bff20f-13a6-4b27-b141-f14e06055948] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009136654s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220906145007-22187 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220906145007-22187 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220906145007-22187 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220906145007-22187 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [948bba5d-dbe2-481b-b13c-9bd44913d18f] Pending
E0906 14:52:47.062060   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:52:47.071813   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:52:47.083238   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:52:47.103913   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:52:47.144133   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:52:47.226347   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:52:47.386481   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:52:47.708486   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
helpers_test.go:342: "sp-pod" [948bba5d-dbe2-481b-b13c-9bd44913d18f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0906 14:52:48.348929   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 14:52:49.629175   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [948bba5d-dbe2-481b-b13c-9bd44913d18f] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007284591s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220906145007-22187 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220906145007-22187 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220906145007-22187 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [971d2eb8-8d5c-4879-85b6-e4f0b40b186c] Pending
helpers_test.go:342: "sp-pod" [971d2eb8-8d5c-4879-85b6-e4f0b40b186c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [971d2eb8-8d5c-4879-85b6-e4f0b40b186c] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010576458s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220906145007-22187 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.32s)
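The manifests applied above live in the repo's testdata and are not reproduced in this log; a minimal claim of the same shape, with the "myclaim" name the test reads back and an assumed size, would be:

kubectl --context functional-20220906145007-22187 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF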

TestFunctional/parallel/SSHCmd (0.89s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.89s)

TestFunctional/parallel/CpCmd (1.62s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh -n functional-20220906145007-22187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 cp functional-20220906145007-22187:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd4244448124/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh -n functional-20220906145007-22187 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)
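The two cp runs above round-trip a file, host to node and node back to a host temp dir, with "ssh -n ... sudo cat" checking the content after each hop. By hand, against the same profile:

out/minikube-darwin-amd64 -p functional-20220906145007-22187 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-darwin-amd64 -p functional-20220906145007-22187 cp functional-20220906145007-22187:/home/docker/cp-test.txt ./cp-test.txt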

TestFunctional/parallel/MySQL (23.87s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220906145007-22187 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-ptpjw" [380d867d-6e2b-432c-a055-e33af35b2990] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-ptpjw" [380d867d-6e2b-432c-a055-e33af35b2990] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.007323765s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220906145007-22187 exec mysql-596b7fcdbf-ptpjw -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220906145007-22187 exec mysql-596b7fcdbf-ptpjw -- mysql -ppassword -e "show databases;": exit status 1 (142.156481ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220906145007-22187 exec mysql-596b7fcdbf-ptpjw -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220906145007-22187 exec mysql-596b7fcdbf-ptpjw -- mysql -ppassword -e "show databases;": exit status 1 (112.781996ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220906145007-22187 exec mysql-596b7fcdbf-ptpjw -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220906145007-22187 exec mysql-596b7fcdbf-ptpjw -- mysql -ppassword -e "show databases;": exit status 1 (114.683698ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220906145007-22187 exec mysql-596b7fcdbf-ptpjw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.87s)
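The non-zero exits above are expected noise: ERROR 1045 and ERROR 2002 are transient while mysqld is still initializing inside the pod, and the test simply retries the query until it succeeds. A hand-rolled equivalent of that retry loop:

until kubectl --context functional-20220906145007-22187 \
    exec mysql-596b7fcdbf-ptpjw -- mysql -ppassword -e "show databases;"; do
  sleep 2
done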

TestFunctional/parallel/FileSync (0.56s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/22187/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo cat /etc/test/nested/copy/22187/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.56s)

TestFunctional/parallel/CertSync (2.82s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/22187.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo cat /etc/ssl/certs/22187.pem"
E0906 14:53:28.031242   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/22187.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo cat /usr/share/ca-certificates/22187.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/221872.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo cat /etc/ssl/certs/221872.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/221872.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo cat /usr/share/ca-certificates/221872.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.82s)
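The 51391683.0 and 3ec20f2e.0 entries checked above are OpenSSL subject-hash filenames: alongside the synced 22187.pem/221872.pem certs, a /etc/ssl/certs/<hash>.0 copy is checked so OpenSSL can look the cert up by hash. The hash for a given PEM can be recomputed locally:

openssl x509 -noout -subject_hash -in 22187.pem   # prints e.g. 51391683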

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220906145007-22187 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo systemctl is-active crio"
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo systemctl is-active crio": exit status 1 (460.928199ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
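"systemctl is-active" exits non-zero for anything but an active unit (the ssh status 3 above is its inactive code), so a failing exit plus "inactive" on stdout is exactly what this test wants for crio on a docker-runtime node:

out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo systemctl is-active crio" \
  || echo "crio is not active (expected when the container runtime is docker)"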

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220906145007-22187 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220906145007-22187 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [0c03e889-9d4c-4972-bdf4-820d35395140] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [0c03e889-9d4c-4972-bdf4-820d35395140] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.042418245s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220906145007-22187 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220906145007-22187 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 24409: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0906 14:53:07.550286   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1310: Took "434.363987ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "76.288443ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "488.950288ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1374: Took "118.940018ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)
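The JSON form is the scriptable one; a consumption sketch with jq, where the .valid[].Name path is an assumption about the profile list schema rather than something shown in this log:

out/minikube-darwin-amd64 profile list -o json | jq -r '.valid[].Name'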

TestFunctional/parallel/MountCmd/any-port (9.48s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220906145007-22187 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3629599732/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1662501188711946000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3629599732/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1662501188711946000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3629599732/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1662501188711946000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3629599732/001/test-1662501188711946000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (446.905419ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 21:53 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 21:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 21:53 test-1662501188711946000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh cat /mount-9p/test-1662501188711946000

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220906145007-22187 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [066c9ea0-3411-4cf5-b04a-3abe6fae92b0] Pending
helpers_test.go:342: "busybox-mount" [066c9ea0-3411-4cf5-b04a-3abe6fae92b0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [066c9ea0-3411-4cf5-b04a-3abe6fae92b0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [066c9ea0-3411-4cf5-b04a-3abe6fae92b0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007087754s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220906145007-22187 logs busybox-mount

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220906145007-22187 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3629599732/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.48s)
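Note the first findmnt probe fails and is retried: the 9p mount is established asynchronously after the mount daemon starts, so callers have to poll. A by-hand equivalent, with the mount command from the first line still running:

until out/minikube-darwin-amd64 -p functional-20220906145007-22187 \
    ssh "findmnt -T /mount-9p | grep 9p"; do
  sleep 1
done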

TestFunctional/parallel/MountCmd/specific-port (2.57s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220906145007-22187 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1015898888/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (435.012758ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220906145007-22187 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1015898888/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh "sudo umount -f /mount-9p": exit status 1 (396.445433ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220906145007-22187 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1015898888/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.57s)

TestFunctional/parallel/DockerEnv/bash (1.69s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220906145007-22187 docker-env) && out/minikube-darwin-amd64 status -p functional-20220906145007-22187"
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220906145007-22187 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.69s)
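docker-env prints shell exports that point the host's docker CLI at the daemon inside the minikube node, which is why "docker images" afterwards lists the node's images. The variable set (typically DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH) is not shown in this log:

eval $(out/minikube-darwin-amd64 -p functional-20220906145007-22187 docker-env)
docker images   # now queries the daemon inside the node, not the host's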

TestFunctional/parallel/Version/short (0.14s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 version --short
--- PASS: TestFunctional/parallel/Version/short (0.14s)

TestFunctional/parallel/Version/components (0.63s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.0
registry.k8s.io/kube-proxy:v1.25.0
registry.k8s.io/kube-controller-manager:v1.25.0
registry.k8s.io/kube-apiserver:v1.25.0
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220906145007-22187
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-controller-manager     | v1.25.0                         | 1a54c86c03a67 | 117MB  |
| registry.k8s.io/kube-proxy                  | v1.25.0                         | 58a9a0c6d96f2 | 61.7MB |
| docker.io/kubernetesui/metrics-scraper      | <none>                          | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| docker.io/localhost/my-image                | functional-20220906145007-22187 | 93083a3bc99de | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-20220906145007-22187 | 699596e8a6d5c | 30B    |
| registry.k8s.io/etcd                        | 3.5.4-0                         | a8a176a5d5d69 | 300MB  |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7                             | daff57b7d2d1e | 430MB  |
| docker.io/library/nginx                     | alpine                          | 804f9cebfdc58 | 23.5MB |
| docker.io/kubernetesui/dashboard            | <none>                          | 1042d9e0d8fcc | 246MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3                          | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220906145007-22187 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.25.0                         | 4d2edfd10d3e3 | 128MB  |
| registry.k8s.io/kube-scheduler              | v1.25.0                         | bef2cf3115095 | 50.6MB |
| docker.io/library/nginx                     | latest                          | 2b7d6430f78d4 | 142MB  |
| registry.k8s.io/pause                       | 3.8                             | 4873874c08efc | 711kB  |
|---------------------------------------------|---------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls --format json:
[{"id":"93083a3bc99dea136099b074d8cc09db86a97cf0d353c7260d2d16a1eedb2f8e","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220906145007-22187"],"size":"1240000"},{"id":"daff57b7d2d1e009d0b271972f62dbf4de64b8cdb9cd646442aeda961e615f44","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"430000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220906145007-22187"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"56cc512116c8f894f11ce199
5460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"4d2edfd10d3e3f4395b70652848e2a1efd5bd0bc38e9bc360d4ee5c51afacfe5","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.0"],"size":"128000000"},{"id":"1a54c86c03a673d4e046b9f64854c713512d39a0136aef76a4a450d5ad51273e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.0"],"size":"117000000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda
1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"bef2cf3115095379b5af3e6c0fb4b0e6a8ef7a144aa2907bd0a3125e9d2e203e","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.0"],"size":"50600000"},{"id":"804f9cebfdc58964d6b25527e53802a3527a9ee880e082dc5b19a3d5466c43b7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23500000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"699596e8a6d5c05728d258226605f45bd72bed67c148025402e29bd02f1ae429","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220906145007-22187"],"size":"30"},{"id":"58a9a0c6d96f2b956afdc831504e6796c23f5f90a7b5341393b762d9ba96f2f6","repoDigests":[],"repoTags":["regist
ry.k8s.io/kube-proxy:v1.25.0"],"size":"61700000"},{"id":"2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls --format yaml:
- id: daff57b7d2d1e009d0b271972f62dbf4de64b8cdb9cd646442aeda961e615f44
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "430000000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 699596e8a6d5c05728d258226605f45bd72bed67c148025402e29bd02f1ae429
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220906145007-22187
size: "30"
- id: 804f9cebfdc58964d6b25527e53802a3527a9ee880e082dc5b19a3d5466c43b7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23500000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 1a54c86c03a673d4e046b9f64854c713512d39a0136aef76a4a450d5ad51273e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.0
size: "117000000"
- id: bef2cf3115095379b5af3e6c0fb4b0e6a8ef7a144aa2907bd0a3125e9d2e203e
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.0
size: "50600000"
- id: 58a9a0c6d96f2b956afdc831504e6796c23f5f90a7b5341393b762d9ba96f2f6
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.0
size: "61700000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 4d2edfd10d3e3f4395b70652848e2a1efd5bd0bc38e9bc360d4ee5c51afacfe5
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.0
size: "128000000"
- id: 2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220906145007-22187 ssh pgrep buildkitd: exit status 1 (448.859759ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image build -t localhost/my-image:functional-20220906145007-22187 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image build -t localhost/my-image:functional-20220906145007-22187 testdata/build: (2.015787816s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image build -t localhost/my-image:functional-20220906145007-22187 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in f97dd0c47bd5
Removing intermediate container f97dd0c47bd5
---> 60ff3d6fab37
Step 3/3 : ADD content.txt /
---> 93083a3bc99d
Successfully built 93083a3bc99d
Successfully tagged localhost/my-image:functional-20220906145007-22187
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)
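The build steps above pin down the Dockerfile in testdata/build: a FROM, a RUN true, and an ADD of content.txt. An equivalent context can be recreated and built like so (the content of content.txt is arbitrary here):

mkdir -p build && printf 'test' > build/content.txt
cat > build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-darwin-amd64 -p functional-20220906145007-22187 \
  image build -t localhost/my-image:functional-20220906145007-22187 ./build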

TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.707833712s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
2022/09/06 14:53:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220906145007-22187: (3.773995104s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220906145007-22187

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220906145007-22187: (2.164177699s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.581943753s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220906145007-22187: (4.131636666s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image save gcr.io/google-containers/addon-resizer:functional-20220906145007-22187 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image save gcr.io/google-containers/addon-resizer:functional-20220906145007-22187 /Users/jenkins/workspace/addon-resizer-save.tar: (1.832457803s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image rm gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.814927288s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220906145007-22187 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220906145007-22187

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220906145007-22187 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220906145007-22187: (2.331935013s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.56s)
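
Taken together, the last four image tests form a full round trip. Condensed into one sketch using the same commands the logs show (MK, P, and IMG are shorthand introduced here, not test variables):

MK=out/minikube-darwin-amd64
P=functional-20220906145007-22187
IMG=gcr.io/google-containers/addon-resizer:$P

$MK -p $P image save $IMG /Users/jenkins/workspace/addon-resizer-save.tar  # cluster -> tarball
$MK -p $P image rm $IMG                                                    # remove from cluster
$MK -p $P image load /Users/jenkins/workspace/addon-resizer-save.tar       # tarball -> cluster
docker rmi $IMG                                                            # remove from host daemon
$MK -p $P image save --daemon $IMG                                         # cluster -> host daemon
docker image inspect $IMG                                                  # confirm it round-tripped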

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.16s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220906145007-22187
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

                                                
                                    
TestFunctional/delete_my-image_image (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220906145007-22187
--- PASS: TestFunctional/delete_my-image_image (0.06s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220906145007-22187
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestJSONOutput/start/Command (45.4s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220906150114-22187 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220906150114-22187 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (45.394499655s)
--- PASS: TestJSONOutput/start/Command (45.40s)
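
Note: with --output=json every stdout line is a single CloudEvents JSON object (samples appear under TestErrorJSONOutput below). A sketch of following the progress steps from a script, assuming jq is available on the host:

out/minikube-darwin-amd64 start -p json-output-20220906150114-22187 --output=json --user=testUser \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'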

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220906150114-22187 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220906150114-22187 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.25s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220906150114-22187 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220906150114-22187 --output=json --user=testUser: (12.245460177s)
--- PASS: TestJSONOutput/stop/Command (12.25s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.76s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220906150215-22187 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220906150215-22187 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (327.706829ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"90f66707-6011-49ca-b1f1-216c1e8a12bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220906150215-22187] minikube v1.26.1 on Darwin 12.5.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8bac9e1-7b30-4e03-9678-218532914240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14848"}}
	{"specversion":"1.0","id":"73e6cb90-2f06-4ec6-9cc1-a5ddc6301f4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig"}}
	{"specversion":"1.0","id":"2743d61e-8bc4-4437-96fb-f8763b77f1cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"6a7bf44d-d9da-446f-8d7d-69aa4bcae633","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9905e57d-b792-4c7f-9508-d0b832cfb008","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube"}}
	{"specversion":"1.0","id":"984779fc-2dac-49cf-88d7-88c78bb5179c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220906150215-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220906150215-22187
--- PASS: TestErrorJSONOutput (0.76s)
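
On failure, minikube emits one io.k8s.sigs.minikube.error event and exits with the matching code (56, DRV_UNSUPPORTED_OS, in this run). A sketch of surfacing that from a wrapper script, again assuming jq:

out=$(out/minikube-darwin-amd64 start -p json-output-error-20220906150215-22187 \
  --memory=2200 --output=json --wait=true --driver=fail)
rc=$?
if [ "$rc" -ne 0 ]; then
  echo "minikube start failed with exit code $rc" >&2                 # 56 in this run
  printf '%s\n' "$out" \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"' >&2
fi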

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.33s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220906150216-22187 --network=
E0906 15:02:41.252057   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:02:47.067939   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220906150216-22187 --network=: (31.606043178s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220906150216-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220906150216-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220906150216-22187: (2.657197972s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.33s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (32.42s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220906150250-22187 --network=bridge
E0906 15:03:08.950526   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220906150250-22187 --network=bridge: (29.846015455s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220906150250-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220906150250-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220906150250-22187: (2.512105042s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.42s)

                                                
                                    
TestKicExistingNetwork (32.73s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220906150323-22187 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220906150323-22187 --network=existing-network: (29.855288764s)
helpers_test.go:175: Cleaning up "existing-network-20220906150323-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220906150323-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220906150323-22187: (2.472460104s)
--- PASS: TestKicExistingNetwork (32.73s)
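
Unlike the custom-network cases above, this test points --network at a Docker network that already exists, so minikube must attach to it rather than create one. The equivalent manual flow, sketched (the create step is the presumed test setup, not shown in the log):

docker network create existing-network            # presumed pre-existing network
out/minikube-darwin-amd64 start -p existing-network-20220906150323-22187 --network=existing-network
docker network ls --format '{{.Name}}'            # should still list existing-network
out/minikube-darwin-amd64 delete -p existing-network-20220906150323-22187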

                                                
                                    
TestKicCustomSubnet (34.17s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220906150355-22187 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220906150355-22187 --subnet=192.168.60.0/24: (31.38934494s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220906150355-22187 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220906150355-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220906150355-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220906150355-22187: (2.715737266s)
--- PASS: TestKicCustomSubnet (34.17s)
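
Here --subnet pins the IPAM range of the network minikube creates, and the test reads it back through Docker's template syntax. Condensed:

out/minikube-darwin-amd64 start -p custom-subnet-20220906150355-22187 --subnet=192.168.60.0/24
docker network inspect custom-subnet-20220906150355-22187 \
  --format '{{(index .IPAM.Config 0).Subnet}}'    # expect 192.168.60.0/24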

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (67.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220906150430-22187 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220906150430-22187 --driver=docker : (29.952883082s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220906150430-22187 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220906150430-22187 --driver=docker : (30.240943365s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220906150430-22187
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220906150430-22187
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220906150430-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220906150430-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220906150430-22187: (2.714296187s)
helpers_test.go:175: Cleaning up "first-20220906150430-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220906150430-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220906150430-22187: (2.689467028s)
--- PASS: TestMinikubeProfile (67.52s)
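
Note: `minikube profile <name>` persists the active profile, and `profile list -ojson` reports every profile. A sketch of pulling the names out of that JSON, assuming jq and assuming the output keeps its usual top-level valid/invalid arrays:

out/minikube-darwin-amd64 profile list -ojson | jq -r '.valid[].Name'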

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220906150537-22187 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220906150537-22187 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.75415085s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.76s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220906150537-22187 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.71s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220906150537-22187 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220906150537-22187 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.713972189s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.71s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220906150537-22187 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.21s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220906150537-22187 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220906150537-22187 --alsologtostderr -v=5: (2.211231566s)
--- PASS: TestMountStart/serial/DeleteFirst (2.21s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220906150537-22187 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220906150537-22187
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220906150537-22187: (1.620709463s)
--- PASS: TestMountStart/serial/Stop (1.62s)

                                                
                                    
TestMountStart/serial/RestartStopped (5.63s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220906150537-22187
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220906150537-22187: (4.629202414s)
--- PASS: TestMountStart/serial/RestartStopped (5.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.46s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220906150537-22187 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.46s)
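
The serial MountStart flow above boils down to: start with --mount (the host directory appears at /minikube-host in the guest), verify over ssh, and confirm the mount survives deleting the sibling profile and a stop/start cycle. Condensed, using the flags from this run:

out/minikube-darwin-amd64 start -p mount-start-2-20220906150537-22187 --memory=2048 \
  --mount --mount-port 46465 --no-kubernetes --driver=docker
out/minikube-darwin-amd64 -p mount-start-2-20220906150537-22187 ssh -- ls /minikube-host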

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220906150606-22187 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0906 15:07:41.254939   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:07:47.070980   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220906150606-22187 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m48.96643742s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.68s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- rollout status deployment/busybox: (2.611187885s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-ppptb -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-trdqs -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-ppptb -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-trdqs -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-ppptb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-trdqs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.24s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-ppptb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-ppptb -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-trdqs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- exec busybox-65db55d5d6-trdqs -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)
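
Note: the shell pipeline above is how the test turns busybox's nslookup output into a bare address: awk 'NR==5' keeps only line 5 (where the resolved entry sits in busybox's output) and cut -d' ' -f3 takes its third space-separated field, which for host.minikube.internal resolved to 192.168.65.2 in this run. From the host:

out/minikube-darwin-amd64 kubectl -p multinode-20220906150606-22187 -- \
  exec busybox-65db55d5d6-ppptb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"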

                                                
                                    
TestMultiNode/serial/AddNode (25.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220906150606-22187 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220906150606-22187 -v 3 --alsologtostderr: (24.01350074s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status --alsologtostderr: (1.000353682s)
--- PASS: TestMultiNode/serial/AddNode (25.01s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

                                                
                                    
TestMultiNode/serial/CopyFile (15.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp testdata/cp-test.txt multinode-20220906150606-22187:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp multinode-20220906150606-22187:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile219338308/001/cp-test_multinode-20220906150606-22187.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp multinode-20220906150606-22187:/home/docker/cp-test.txt multinode-20220906150606-22187-m02:/home/docker/cp-test_multinode-20220906150606-22187_multinode-20220906150606-22187-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m02 "sudo cat /home/docker/cp-test_multinode-20220906150606-22187_multinode-20220906150606-22187-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp multinode-20220906150606-22187:/home/docker/cp-test.txt multinode-20220906150606-22187-m03:/home/docker/cp-test_multinode-20220906150606-22187_multinode-20220906150606-22187-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m03 "sudo cat /home/docker/cp-test_multinode-20220906150606-22187_multinode-20220906150606-22187-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp testdata/cp-test.txt multinode-20220906150606-22187-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp multinode-20220906150606-22187-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile219338308/001/cp-test_multinode-20220906150606-22187-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp multinode-20220906150606-22187-m02:/home/docker/cp-test.txt multinode-20220906150606-22187:/home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 "sudo cat /home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp multinode-20220906150606-22187-m02:/home/docker/cp-test.txt multinode-20220906150606-22187-m03:/home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m03 "sudo cat /home/docker/cp-test_multinode-20220906150606-22187-m02_multinode-20220906150606-22187-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp testdata/cp-test.txt multinode-20220906150606-22187-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp multinode-20220906150606-22187-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile219338308/001/cp-test_multinode-20220906150606-22187-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp multinode-20220906150606-22187-m03:/home/docker/cp-test.txt multinode-20220906150606-22187:/home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187 "sudo cat /home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 cp multinode-20220906150606-22187-m03:/home/docker/cp-test.txt multinode-20220906150606-22187-m02:/home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 ssh -n multinode-20220906150606-22187-m02 "sudo cat /home/docker/cp-test_multinode-20220906150606-22187-m03_multinode-20220906150606-22187-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (15.07s)
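
minikube cp accepts a plain host path or <node>:<path> on either side, and the matrix above exercises every direction for every node pair. The three shapes, sketched (the /tmp target is illustrative; the run used a TestMultiNodeserialCopyFile temp dir):

MK="out/minikube-darwin-amd64 -p multinode-20220906150606-22187"
$MK cp testdata/cp-test.txt multinode-20220906150606-22187:/home/docker/cp-test.txt   # host -> node
$MK cp multinode-20220906150606-22187:/home/docker/cp-test.txt /tmp/cp-test.txt       # node -> host
$MK cp multinode-20220906150606-22187:/home/docker/cp-test.txt \
  multinode-20220906150606-22187-m02:/home/docker/cp-test.txt                         # node -> node
$MK ssh -n multinode-20220906150606-22187-m02 "sudo cat /home/docker/cp-test.txt"     # verify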

                                                
                                    
TestMultiNode/serial/StopNode (13.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 node stop m03: (12.359739767s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status: exit status 7 (779.070624ms)

                                                
                                                
-- stdout --
	multinode-20220906150606-22187
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220906150606-22187-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220906150606-22187-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status --alsologtostderr: exit status 7 (776.966677ms)

                                                
                                                
-- stdout --
	multinode-20220906150606-22187
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220906150606-22187-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220906150606-22187-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 15:08:55.520695   28335 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:08:55.520862   28335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:08:55.520867   28335 out.go:309] Setting ErrFile to fd 2...
	I0906 15:08:55.520870   28335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:08:55.520968   28335 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:08:55.521124   28335 out.go:303] Setting JSON to false
	I0906 15:08:55.521139   28335 mustload.go:65] Loading cluster: multinode-20220906150606-22187
	I0906 15:08:55.521429   28335 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:08:55.521439   28335 status.go:253] checking status of multinode-20220906150606-22187 ...
	I0906 15:08:55.521839   28335 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:08:55.584843   28335 status.go:328] multinode-20220906150606-22187 host status = "Running" (err=<nil>)
	I0906 15:08:55.584869   28335 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:08:55.585177   28335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187
	I0906 15:08:55.648547   28335 host.go:66] Checking if "multinode-20220906150606-22187" exists ...
	I0906 15:08:55.648824   28335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:08:55.648877   28335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:08:55.713831   28335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56913 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187/id_rsa Username:docker}
	I0906 15:08:55.794214   28335 ssh_runner.go:195] Run: systemctl --version
	I0906 15:08:55.798596   28335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:08:55.807334   28335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220906150606-22187
	I0906 15:08:55.871850   28335 kubeconfig.go:92] found "multinode-20220906150606-22187" server: "https://127.0.0.1:56912"
	I0906 15:08:55.871875   28335 api_server.go:165] Checking apiserver status ...
	I0906 15:08:55.871916   28335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 15:08:55.881581   28335 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1633/cgroup
	W0906 15:08:55.889386   28335 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1633/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 15:08:55.889400   28335 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56912/healthz ...
	I0906 15:08:55.894866   28335 api_server.go:266] https://127.0.0.1:56912/healthz returned 200:
	ok
	I0906 15:08:55.894880   28335 status.go:419] multinode-20220906150606-22187 apiserver status = Running (err=<nil>)
	I0906 15:08:55.894889   28335 status.go:255] multinode-20220906150606-22187 status: &{Name:multinode-20220906150606-22187 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 15:08:55.894904   28335 status.go:253] checking status of multinode-20220906150606-22187-m02 ...
	I0906 15:08:55.895135   28335 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:08:55.959041   28335 status.go:328] multinode-20220906150606-22187-m02 host status = "Running" (err=<nil>)
	I0906 15:08:55.959069   28335 host.go:66] Checking if "multinode-20220906150606-22187-m02" exists ...
	I0906 15:08:55.959331   28335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220906150606-22187-m02
	I0906 15:08:56.023007   28335 host.go:66] Checking if "multinode-20220906150606-22187-m02" exists ...
	I0906 15:08:56.023245   28335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 15:08:56.023301   28335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220906150606-22187-m02
	I0906 15:08:56.086961   28335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56972 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/machines/multinode-20220906150606-22187-m02/id_rsa Username:docker}
	I0906 15:08:56.170234   28335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 15:08:56.179696   28335 status.go:255] multinode-20220906150606-22187-m02 status: &{Name:multinode-20220906150606-22187-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0906 15:08:56.179718   28335 status.go:253] checking status of multinode-20220906150606-22187-m03 ...
	I0906 15:08:56.179970   28335 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m03 --format={{.State.Status}}
	I0906 15:08:56.246070   28335 status.go:328] multinode-20220906150606-22187-m03 host status = "Stopped" (err=<nil>)
	I0906 15:08:56.246091   28335 status.go:341] host is not running, skipping remaining checks
	I0906 15:08:56.246098   28335 status.go:255] multinode-20220906150606-22187-m03 status: &{Name:multinode-20220906150606-22187-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (13.92s)

TestMultiNode/serial/StartAfterStop (19.2s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 node start m03 --alsologtostderr
E0906 15:09:10.124285   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 node start m03 --alsologtostderr: (18.076886414s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status: (1.008628679s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.20s)

TestMultiNode/serial/DeleteNode (7.93s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 node delete m03: (7.05722386s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (7.93s)

TestMultiNode/serial/StopMultiNode (25s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 stop: (24.646920817s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status: exit status 7 (177.250501ms)

-- stdout --
	multinode-20220906150606-22187
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220906150606-22187-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220906150606-22187 status --alsologtostderr: exit status 7 (176.430537ms)

-- stdout --
	multinode-20220906150606-22187
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220906150606-22187-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0906 15:13:46.852763   29019 out.go:296] Setting OutFile to fd 1 ...
	I0906 15:13:46.852956   29019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:13:46.852961   29019 out.go:309] Setting ErrFile to fd 2...
	I0906 15:13:46.852964   29019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 15:13:46.853068   29019 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/bin
	I0906 15:13:46.853240   29019 out.go:303] Setting JSON to false
	I0906 15:13:46.853256   29019 mustload.go:65] Loading cluster: multinode-20220906150606-22187
	I0906 15:13:46.853530   29019 config.go:180] Loaded profile config "multinode-20220906150606-22187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
	I0906 15:13:46.853542   29019 status.go:253] checking status of multinode-20220906150606-22187 ...
	I0906 15:13:46.853901   29019 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187 --format={{.State.Status}}
	I0906 15:13:46.916496   29019 status.go:328] multinode-20220906150606-22187 host status = "Stopped" (err=<nil>)
	I0906 15:13:46.916527   29019 status.go:341] host is not running, skipping remaining checks
	I0906 15:13:46.916535   29019 status.go:255] multinode-20220906150606-22187 status: &{Name:multinode-20220906150606-22187 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 15:13:46.916564   29019 status.go:253] checking status of multinode-20220906150606-22187-m02 ...
	I0906 15:13:46.916855   29019 cli_runner.go:164] Run: docker container inspect multinode-20220906150606-22187-m02 --format={{.State.Status}}
	I0906 15:13:46.978135   29019 status.go:328] multinode-20220906150606-22187-m02 host status = "Stopped" (err=<nil>)
	I0906 15:13:46.978175   29019 status.go:341] host is not running, skipping remaining checks
	I0906 15:13:46.978189   29019 status.go:255] multinode-20220906150606-22187-m02 status: &{Name:multinode-20220906150606-22187-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.00s)

TestMultiNode/serial/ValidateNameConflict (35.09s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220906150606-22187
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220906150606-22187-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220906150606-22187-m02 --driver=docker : exit status 14 (393.081538ms)

-- stdout --
	* [multinode-20220906150606-22187-m02] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220906150606-22187-m02' is duplicated with machine name 'multinode-20220906150606-22187-m02' in profile 'multinode-20220906150606-22187'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220906150606-22187-m03 --driver=docker 
E0906 15:17:41.257812   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:17:47.071751   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220906150606-22187-m03 --driver=docker : (31.457355415s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220906150606-22187
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220906150606-22187: exit status 80 (484.42189ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220906150606-22187
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220906150606-22187-m03 already exists in multinode-20220906150606-22187-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220906150606-22187-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220906150606-22187-m03: (2.70473287s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.09s)

TestScheduledStopUnix (101.48s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220906152228-22187 --memory=2048 --driver=docker 
E0906 15:22:41.259102   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:22:47.072978   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220906152228-22187 --memory=2048 --driver=docker : (27.121220287s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220906152228-22187 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220906152228-22187 -n scheduled-stop-20220906152228-22187
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220906152228-22187 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220906152228-22187 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220906152228-22187 -n scheduled-stop-20220906152228-22187
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220906152228-22187
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220906152228-22187 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220906152228-22187
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220906152228-22187: exit status 7 (117.952085ms)

-- stdout --
	scheduled-stop-20220906152228-22187
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220906152228-22187 -n scheduled-stop-20220906152228-22187
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220906152228-22187 -n scheduled-stop-20220906152228-22187: exit status 7 (115.826613ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220906152228-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220906152228-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220906152228-22187: (2.392246087s)
--- PASS: TestScheduledStopUnix (101.48s)

TestSkaffold (59.65s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1568913927 version
skaffold_test.go:63: skaffold version: v1.39.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220906152410-22187 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220906152410-22187 --memory=2600 --driver=docker : (26.062929066s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1568913927 run --minikube-profile skaffold-20220906152410-22187 --kube-context skaffold-20220906152410-22187 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1568913927 run --minikube-profile skaffold-20220906152410-22187 --kube-context skaffold-20220906152410-22187 --status-check=true --port-forward=false --interactive=false: (18.89444441s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-59fd59f745-wlpww" [e1203e2f-1e93-4e9a-a331-fde277b33056] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012159195s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-6dc9545f84-p7hgv" [7bb818ae-2a94-4388-90a9-ec4ee399ec91] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008951875s
helpers_test.go:175: Cleaning up "skaffold-20220906152410-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220906152410-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220906152410-22187: (2.979046476s)
--- PASS: TestSkaffold (59.65s)

TestInsufficientStorage (12.31s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220906152509-22187 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220906152509-22187 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.006177715s)

-- stdout --
	{"specversion":"1.0","id":"3803df81-2f8b-4496-be2f-749c432c0768","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220906152509-22187] minikube v1.26.1 on Darwin 12.5.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d972813-c6dd-44fd-86dd-3a2327976076","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14848"}}
	{"specversion":"1.0","id":"c47a5700-7d3e-4e77-9134-c9e47a84c9db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig"}}
	{"specversion":"1.0","id":"bf86e1c7-248e-4054-9c86-8df7563b39d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"5375b8f5-4268-4464-b618-9a4cac6a4d4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e1fff966-cdc9-46f8-863a-0cf7fabe3797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube"}}
	{"specversion":"1.0","id":"ef740e4c-05e7-4938-a0ce-1a2e27622ead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5292344f-0e61-4e2a-8b0f-fb7baffb16d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9a8a2a00-db00-4d88-8b0d-2b8aaefc4ef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c8f0203-6d56-4289-8528-ad04dc8d8e08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"13fc03e8-0c5d-41e0-aab1-9c83ca241d34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220906152509-22187 in cluster insufficient-storage-20220906152509-22187","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6180b639-c89a-4357-825a-25b185f1d942","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a892258-5ccc-4ba8-994e-c77028f4253d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1692280-cd19-4f3e-9547-0e0d98a918a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220906152509-22187 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220906152509-22187 --output=json --layout=cluster: exit status 7 (453.077875ms)

-- stdout --
	{"Name":"insufficient-storage-20220906152509-22187","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220906152509-22187","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0906 15:25:19.409936   30718 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220906152509-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220906152509-22187 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220906152509-22187 --output=json --layout=cluster: exit status 7 (404.704968ms)

-- stdout --
	{"Name":"insufficient-storage-20220906152509-22187","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220906152509-22187","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0906 15:25:19.815704   30728 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220906152509-22187" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	E0906 15:25:19.824057   30728 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/insufficient-storage-20220906152509-22187/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220906152509-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220906152509-22187
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220906152509-22187: (2.440345387s)
--- PASS: TestInsufficientStorage (12.31s)

TestStoppedBinaryUpgrade/Setup (0.78s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.78s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.5s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220906152634-22187
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220906152634-22187: (3.495018921s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.50s)

TestPause/serial/Start (43.91s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220906152815-22187 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220906152815-22187 --memory=2048 --install-addons=false --wait=all --driver=docker : (43.910932058s)
--- PASS: TestPause/serial/Start (43.91s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (407.669012ms)

-- stdout --
	* [NoKubernetes-20220906153018-22187] minikube v1.26.1 on Darwin 12.5.1
	  - MINIKUBE_LOCATION=14848
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)

TestNoKubernetes/serial/StartWithK8s (30.2s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --driver=docker 

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --driver=docker : (29.708377979s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220906153018-22187 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.20s)

TestNoKubernetes/serial/StartWithStopK8s (17.14s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --no-kubernetes --driver=docker 

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --no-kubernetes --driver=docker : (14.168106874s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220906153018-22187 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220906153018-22187 status -o json: exit status 2 (430.330705ms)

-- stdout --
	{"Name":"NoKubernetes-20220906153018-22187","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220906153018-22187
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220906153018-22187: (2.540844898s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.14s)

TestNoKubernetes/serial/Start (6.78s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --no-kubernetes --driver=docker : (6.775218322s)
--- PASS: TestNoKubernetes/serial/Start (6.78s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220906153018-22187 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220906153018-22187 "sudo systemctl is-active --quiet service kubelet": exit status 1 (420.915856ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

TestNoKubernetes/serial/ProfileList (4.59s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (3.755695977s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.59s)

TestNoKubernetes/serial/Stop (1.68s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220906153018-22187

=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220906153018-22187: (1.680244552s)
--- PASS: TestNoKubernetes/serial/Stop (1.68s)

TestNoKubernetes/serial/StartNoArgs (5.25s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --driver=docker 

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220906153018-22187 --driver=docker : (5.249625244s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.25s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220906153018-22187 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220906153018-22187 "sudo systemctl is-active --quiet service kubelet": exit status 1 (427.158746ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.82s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.1 on darwin
- MINIKUBE_LOCATION=14848
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1946861740/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1946861740/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1946861740/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1946861740/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.82s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.11s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.1 on darwin
- MINIKUBE_LOCATION=14848
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3560524106/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3560524106/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3560524106/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3560524106/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.11s)

TestNetworkPlugins/group/calico/Start (309.51s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220906152523-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0906 15:34:56.967427   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:35:24.657443   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220906152523-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (5m9.506243109s)
--- PASS: TestNetworkPlugins/group/calico/Start (309.51s)

TestNetworkPlugins/group/auto/Start (44.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (44.791990072s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.79s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220906152522-22187 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestNetworkPlugins/group/auto/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220906152522-22187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jrh94" [b8f681fc-6fde-4648-8c03-68659137f8bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-jrh94" [b8f681fc-6fde-4648-8c03-68659137f8bb] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006485975s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.19s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220906152522-22187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (5.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.115832858s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.12s)

TestNetworkPlugins/group/false/Start (44.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (44.784987971s)
--- PASS: TestNetworkPlugins/group/false/Start (44.79s)

TestNetworkPlugins/group/false/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220906152522-22187 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.42s)

TestNetworkPlugins/group/false/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220906152522-22187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-8chgt" [f1ece133-0291-43ae-bdd3-878d37556060] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 15:37:41.269004   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-8chgt" [f1ece133-0291-43ae-bdd3-878d37556060] Running
E0906 15:37:47.083768   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.006831773s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.18s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220906152522-22187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (5.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.121260541s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.12s)

TestNetworkPlugins/group/kindnet/Start (49.7s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (49.703322436s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (49.70s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-hqqjh" [4a3e9ef8-d210-4af7-974b-cef28e743b92] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017414233s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220906152523-22187 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220906152523-22187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-8sv6h" [41fc2f1e-684f-4b53-a4fa-b0c466d737bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-8sv6h" [41fc2f1e-684f-4b53-a4fa-b0c466d737bd] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006545713s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.20s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-m6r7q" [89ebf540-11bc-44b2-a453-ac961425e316] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014586972s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220906152522-22187 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220906152523-22187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)
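
Note: the DNS subtest proves in-cluster service discovery by resolving kubernetes.default from inside the netcat pod; on a healthy cluster this prints the ClusterIP of the kubernetes service. Straight from the log:

  kubectl --context calico-20220906152523-22187 exec deployment/netcat -- nslookup kubernetes.default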

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220906152523-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)
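
Note: Localhost verifies the pod can reach a port bound on its own loopback. In the nc invocation, -z connects without sending data, -w 5 bounds the connect timeout at five seconds, and -i 5 spaces out the probes:

  kubectl --context calico-20220906152523-22187 exec deployment/netcat -- \
    /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"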

TestNetworkPlugins/group/kindnet/NetCatPod (12.21s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220906152522-22187 replace --force -f testdata/netcat-deployment.yaml
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-xdzq2" [fcae61c7-9b55-4840-b263-b0fb3dbea03a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-xdzq2" [fcae61c7-9b55-4840-b263-b0fb3dbea03a] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.006899748s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.21s)

TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220906152523-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)
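
Note: HairPin is the same probe pointed at the pod's own service name rather than localhost: the connection leaves the pod for the netcat service VIP and must be routed back into the pod that opened it, which only succeeds when the CNI/kube-proxy hairpin path works:

  # Connect back to ourselves through the service VIP.
  kubectl --context calico-20220906152523-22187 exec deployment/netcat -- \
    /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"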

TestNetworkPlugins/group/enable-default-cni/Start (46.75s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (46.74664752s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.75s)
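
Note: every network-plugin group in this run boots its own profile with the same start command, varying only the networking flag: --enable-default-cni=true here, --cni=bridge and --cni=cilium for the bridge and cilium groups below, and --network-plugin=kubenet for the kubenet group. The shared shape, with placeholders standing in for the per-group values:

  out/minikube-darwin-amd64 start -p <profile> --memory=2048 --alsologtostderr \
    --wait=true --wait-timeout=5m <networking-flag> --driver=docker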

TestNetworkPlugins/group/kindnet/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220906152522-22187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (45.5s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (45.497654335s)
--- PASS: TestNetworkPlugins/group/bridge/Start (45.50s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220906152522-22187 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220906152522-22187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-cnxw8" [0a74573e-32bf-4645-8afb-5fae95a0336d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-cnxw8" [0a74573e-32bf-4645-8afb-5fae95a0336d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.008022693s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220906152522-22187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.53s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220906152522-22187 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.53s)

TestNetworkPlugins/group/bridge/NetCatPod (11.21s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220906152522-22187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-zsnqj" [d694fcd3-658f-452d-9394-6aafe628d375] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 15:39:56.967679   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-zsnqj" [d694fcd3-658f-452d-9394-6aafe628d375] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006634727s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

TestNetworkPlugins/group/kubenet/Start (45.1s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220906152522-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (45.102189124s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (45.10s)

TestNetworkPlugins/group/bridge/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220906152522-22187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/cilium/Start (73.89s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220906152523-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220906152523-22187 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m13.888380428s)
--- PASS: TestNetworkPlugins/group/cilium/Start (73.89s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220906152522-22187 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.19s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-j46n9" [ef5be7bd-434e-4d7c-ac0a-6dae03d9a919] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-j46n9" [ef5be7bd-434e-4d7c-ac0a-6dae03d9a919] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.006579751s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.19s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220906152522-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-nbv5x" [e8660e3b-7dc1-46ad-8e61-27e2617a87d9] Running
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.017522979s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220906152523-22187 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.42s)

TestNetworkPlugins/group/cilium/NetCatPod (10.62s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220906152523-22187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-t8zz5" [186da1a4-d3c4-44d8-b82e-026e65f4211b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-t8zz5" [186da1a4-d3c4-44d8-b82e-026e65f4211b] Running
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.00708202s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.62s)

TestNetworkPlugins/group/cilium/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220906152523-22187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.12s)

TestNetworkPlugins/group/cilium/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220906152523-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

TestNetworkPlugins/group/cilium/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220906152523-22187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (51.16s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220906154156-22187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.0
E0906 15:41:58.036124   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:18.517221   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:30.138651   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 15:42:41.106007   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:41.111264   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:41.121623   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:41.142246   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:41.182443   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:41.263211   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:41.268994   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:42:41.423517   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:41.744969   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:42.385669   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:43.665871   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:46.226507   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:42:47.083187   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220906154156-22187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.0: (51.15590653s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.16s)
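
Note: with --preload=false, minikube skips the preloaded-images tarball (the .minikube/cache/preloaded-tarball artifact referenced elsewhere in this report) and pulls each Kubernetes image individually, which plausibly accounts for this FirstStart running slower than the preloaded Start runs above. The invocation, from the log:

  out/minikube-darwin-amd64 start -p no-preload-20220906154156-22187 --memory=2200 \
    --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.0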

TestStartStop/group/no-preload/serial/DeployApp (13.27s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220906154156-22187 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [1033d537-8088-4972-bc47-d1bf4e1cf9a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0906 15:42:51.346676   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
helpers_test.go:342: "busybox" [1033d537-8088-4972-bc47-d1bf4e1cf9a5] Running
E0906 15:42:59.479514   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.01289321s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220906154156-22187 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.27s)
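
Note: DeployApp creates the busybox pod from testdata/busybox.yaml (whose contents are not shown in this report), waits up to 8m0s for it to run, then execs "ulimit -n" to confirm the container can report its open-file limit. By hand, with an illustrative kubectl wait in place of the harness's poll:

  kubectl --context no-preload-20220906154156-22187 create -f testdata/busybox.yaml
  kubectl --context no-preload-20220906154156-22187 wait --for=condition=Ready pod/busybox --timeout=8m
  kubectl --context no-preload-20220906154156-22187 exec busybox -- /bin/sh -c "ulimit -n"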

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220906154156-22187 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220906154156-22187 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)
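
Note: EnableAddonWhileActive turns on metrics-server with deliberately rewritten image sources (--images maps MetricsServer to echoserver and --registries points it at the unreachable fake.domain), and the follow-up kubectl describe presumably verifies that the overrides landed in the deployment rather than that metrics-server works. A narrower hand check of the rendered image (the jsonpath query is an illustration, not part of the test):

  kubectl --context no-preload-20220906154156-22187 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'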

TestStartStop/group/no-preload/serial/Stop (12.43s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220906154156-22187 --alsologtostderr -v=3
E0906 15:43:01.587079   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220906154156-22187 --alsologtostderr -v=3: (12.429889191s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.43s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187: exit status 7 (117.192938ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220906154156-22187 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)
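
Note: a stopped profile makes minikube status exit non-zero (7 here, with Stopped on stdout), which the harness explicitly tolerates as "may be ok"; the point of the subtest is that addons can still be toggled while the cluster is down. The probe by hand:

  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220906154156-22187
  echo $?   # 7 while the profile is stopped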

TestStartStop/group/no-preload/serial/SecondStart (304.26s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220906154156-22187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.0
E0906 15:43:22.068041   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:37.698251   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:37.704039   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:37.716260   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:37.737073   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:37.777314   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:37.858591   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:38.020012   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:38.340576   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:38.980799   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:40.261004   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:42.821749   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:47.942062   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:49.005562   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:49.011254   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:49.021453   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:49.041794   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:49.083886   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:49.200846   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:49.362959   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:49.683634   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:50.324117   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:51.605972   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:54.166590   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:43:58.183122   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:43:59.288871   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:03.028230   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:09.529805   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:18.664394   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:44:21.399803   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:30.010006   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:45.081841   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:45.086978   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:45.097674   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:45.119609   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:45.161511   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:45.243027   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:45.405304   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:45.726799   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:46.368930   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:47.651589   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:50.213357   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:55.336894   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:56.186836   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:56.191980   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:56.202857   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:56.223546   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:56.263748   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:56.345346   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:56.505524   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:56.826839   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:56.984627   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
E0906 15:44:57.467760   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:58.748605   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:44:59.642339   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:45:01.310604   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:05.581287   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:06.434130   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:10.992482   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:16.676719   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:24.970652   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:26.064544   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:37.159843   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:44.043823   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:44.050187   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:44.062327   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:44.082860   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:44.180728   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:44.260846   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:44.420964   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:44.742397   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:45.384614   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:46.664794   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:49.227041   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:45:54.347377   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220906154156-22187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.0: (5m3.808485596s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220906154156-22187 -n no-preload-20220906154156-22187
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (304.26s)

TestStartStop/group/old-k8s-version/serial/Stop (1.62s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220906154143-22187 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220906154143-22187 --alsologtostderr -v=3: (1.617223445s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220906154143-22187 -n old-k8s-version-20220906154143-22187: exit status 7 (116.369346ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220906154143-22187 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-54596f475f-4v92l" [14065c20-36c8-457b-a3ae-c7a4132e59f4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-54596f475f-4v92l" [14065c20-36c8-457b-a3ae-c7a4132e59f4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.015461166s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)
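
Note: UserAppExistsAfterStop and AddonExistsAfterStop (next) confirm that the dashboard workload enabled before the stop comes back on its own after SecondStart. A manual spot check using the label from the log:

  kubectl --context no-preload-20220906154156-22187 -n kubernetes-dashboard \
    get pods -l k8s-app=kubernetes-dashboard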

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-54596f475f-4v92l" [14065c20-36c8-457b-a3ae-c7a4132e59f4] Running
E0906 15:48:27.952986   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006025598s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220906154156-22187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220906154156-22187 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)
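
The VerifyKubernetesImages step ssh-es into the node, dumps the runtime's image list as JSON, and flags anything outside the expected Kubernetes image set (here the gcr.io/k8s-minikube/busybox image left over from DeployApp). A sketch of that scan, assuming crictl's usual {"images":[{"repoTags":[...]}]} JSON shape; the struct, filter, and sample input are illustrative, not minikube's exact test code:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// criImageList models the subset of `crictl images -o json` output we need.
type criImageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// nonK8sImages returns every repo tag that is not from k8s.gcr.io.
func nonK8sImages(raw []byte) ([]string, error) {
	var list criImageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var extra []string
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "k8s.gcr.io/") {
				extra = append(extra, tag)
			}
		}
	}
	return extra, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}]}`)
	extra, err := nonK8sImages(sample)
	fmt.Println(extra, err)
}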

TestStartStop/group/default-k8s-different-port/serial/FirstStart (44.92s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220906154915-22187 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.0
E0906 15:49:16.759633   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:49:45.094935   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:49:56.193688   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:49:56.991761   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220906154915-22187 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.0: (44.917422458s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (44.92s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.27s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220906154915-22187 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [47480249-37d7-4638-9b26-f378bb8bc497] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [47480249-37d7-4638-9b26-f378bb8bc497] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.01288451s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220906154915-22187 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.27s)
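
DeployApp finishes by exec-ing `ulimit -n` inside the busybox pod, which both proves kubectl exec works end to end and surfaces the container's open-file-descriptor limit. A minimal sketch of the same probe from Go, assuming kubectl is on PATH; the helper is illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// podOpenFileLimit execs `ulimit -n` in the busybox pod and parses the result.
func podOpenFileLimit(kubeContext string) (int, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	n, err := podOpenFileLimit("default-k8s-different-port-20220906154915-22187")
	fmt.Println(n, err)
}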

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.79s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220906154915-22187 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220906154915-22187 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.79s)
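
The addon flags above remap metrics-server's image to an echoserver tag and its registry to the unreachable fake.domain, so the addon can be enabled and its Deployment asserted on without ever pulling a real image. The values follow a Name=Value list shape; a toy parser for that shape, purely illustrative of the convention:

package main

import (
	"fmt"
	"strings"
)

// parseOverrides turns "MetricsServer=k8s.gcr.io/echoserver:1.4" style
// comma-separated flag values into a name->value map.
func parseOverrides(flag string) map[string]string {
	m := map[string]string{}
	for _, pair := range strings.Split(flag, ",") {
		if name, val, ok := strings.Cut(pair, "="); ok {
			m[name] = val
		}
	}
	return m
}

func main() {
	fmt.Println(parseOverrides("MetricsServer=k8s.gcr.io/echoserver:1.4"))
	fmt.Println(parseOverrides("MetricsServer=fake.domain"))
}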

TestStartStop/group/default-k8s-different-port/serial/Stop (12.51s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220906154915-22187 --alsologtostderr -v=3
E0906 15:50:12.787033   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220906154915-22187 --alsologtostderr -v=3: (12.512007607s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.51s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.37s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187: exit status 7 (114.933162ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220906154915-22187 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (297.28s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220906154915-22187 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.0
E0906 15:50:23.883818   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:50:44.042824   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:51:11.794521   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kubenet-20220906152522-22187/client.crt: no such file or directory
E0906 15:51:24.327686   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:51:37.573091   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/auto-20220906152522-22187/client.crt: no such file or directory
E0906 15:51:52.016349   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/cilium-20220906152523-22187/client.crt: no such file or directory
E0906 15:52:41.129455   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/false-20220906152522-22187/client.crt: no such file or directory
E0906 15:52:41.292967   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/functional-20220906145007-22187/client.crt: no such file or directory
E0906 15:52:47.108917   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
E0906 15:52:47.471380   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:47.476448   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:47.488580   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:47.508714   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:47.549778   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:47.630069   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:47.790437   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:48.110692   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:48.750851   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:50.033097   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:52.595335   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:52:57.715467   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:53:07.955659   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:53:28.435955   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:53:37.722460   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/calico-20220906152523-22187/client.crt: no such file or directory
E0906 15:53:49.029865   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/kindnet-20220906152522-22187/client.crt: no such file or directory
E0906 15:54:09.397604   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
E0906 15:54:45.095622   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
E0906 15:54:56.194435   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/bridge-20220906152522-22187/client.crt: no such file or directory
E0906 15:54:56.992760   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/skaffold-20220906152410-22187/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220906154915-22187 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.0: (4m56.763916932s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220906154915-22187 -n default-k8s-different-port-20220906154915-22187
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (297.28s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (6.02s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-54596f475f-q5gxc" [5c52cb40-b9c3-4910-87ba-7c97614ca12e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-54596f475f-q5gxc" [5c52cb40-b9c3-4910-87ba-7c97614ca12e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.014569812s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-54596f475f-q5gxc" [5c52cb40-b9c3-4910-87ba-7c97614ca12e] Running
E0906 15:55:31.320053   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/no-preload-20220906154156-22187/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009225852s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220906154915-22187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.47s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220906154915-22187 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/newest-cni/serial/FirstStart (40.73s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220906155618-22187 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.0
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220906155618-22187 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.0: (40.733387676s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.73s)
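
The newest-cni start above passes --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16, minikube's component.key=value convention for forwarding a setting to a specific component (kubeadm here). A toy decomposition of that form; the type and parser are illustrative, not minikube's flag code:

package main

import (
	"fmt"
	"strings"
)

// extraConfig is one component.key=value setting, e.g.
// kubeadm.pod-network-cidr=192.168.111.111/16.
type extraConfig struct {
	Component, Key, Value string
}

func parseExtraConfig(arg string) (extraConfig, error) {
	kv, val, ok := strings.Cut(arg, "=")
	if !ok {
		return extraConfig{}, fmt.Errorf("want component.key=value, got %q", arg)
	}
	comp, key, ok := strings.Cut(kv, ".")
	if !ok {
		return extraConfig{}, fmt.Errorf("want component.key=value, got %q", arg)
	}
	return extraConfig{comp, key, val}, nil
}

func main() {
	fmt.Println(parseExtraConfig("kubeadm.pod-network-cidr=192.168.111.111/16"))
}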

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.79s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220906155618-22187 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/newest-cni/serial/Stop (12.47s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220906155618-22187 --alsologtostderr -v=3
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220906155618-22187 --alsologtostderr -v=3: (12.467643908s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.47s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187: exit status 7 (118.113243ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220906155618-22187 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/newest-cni/serial/SecondStart (17.72s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220906155618-22187 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.0
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220906155618-22187 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.0: (17.227429447s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220906155618-22187 -n newest-cni-20220906155618-22187
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.72s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220906155618-22187 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/embed-certs/serial/FirstStart (42.95s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220906155821-22187 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.0
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220906155821-22187 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.0: (42.945588497s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.95s)

TestStartStop/group/embed-certs/serial/DeployApp (12.27s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220906155821-22187 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [b69036b3-fe97-4eec-b780-1eac49908542] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:342: "busybox" [b69036b3-fe97-4eec-b780-1eac49908542] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.012774834s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220906155821-22187 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220906155821-22187 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220906155821-22187 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/embed-certs/serial/Stop (12.53s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220906155821-22187 --alsologtostderr -v=3
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220906155821-22187 --alsologtostderr -v=3: (12.534357111s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.53s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187: exit status 7 (115.967055ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220906155821-22187 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/embed-certs/serial/SecondStart (301.82s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220906155821-22187 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.0
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220906155821-22187 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.0: (5m1.360585349s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220906155821-22187 -n embed-certs-20220906155821-22187
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (301.82s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-54596f475f-8dtl6" [b44daf06-cea8-4179-b626-1a1e13fc9778] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-54596f475f-8dtl6" [b44daf06-cea8-4179-b626-1a1e13fc9778] Running
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.011723243s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-54596f475f-8dtl6" [b44daf06-cea8-4179-b626-1a1e13fc9778] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007433246s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220906155821-22187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0906 16:04:45.083092   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/enable-default-cni-20220906152522-22187/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.45s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220906155821-22187 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.45s)

Test skip (18/287)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.25.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.25.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.0/cached-images (0.00s)

TestDownloadOnly/v1.25.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.25.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.0/binaries (0.00s)

TestAddons/parallel/Registry (14.94s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 11.924164ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-hbccx" [4eb0ed31-b260-48f0-8bb6-33dbe010f3a9] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010651851s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-wjwvq" [c6ba6b9c-db4d-4248-828c-5fc66eb48fce] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009316897s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220906144437-22187 delete po -l run=registry-test --now
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220906144437-22187 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220906144437-22187 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.832167715s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.94s)

TestAddons/parallel/Ingress (10.85s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220906144437-22187 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220906144437-22187 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220906144437-22187 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [1ba357a5-64b4-4e1c-a0af-4118de569b4c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [1ba357a5-64b4-4e1c-a0af-4118de569b4c] Running
=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013440709s
addons_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220906144437-22187 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.85s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (10.12s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220906145007-22187 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220906145007-22187 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-dp99q" [a8e5a7c5-eb45-400d-a614-b19222d48ba5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0906 14:52:52.189460   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-dp99q" [a8e5a7c5-eb45-400d-a614-b19222d48ba5] Running
E0906 14:52:57.309870   22187 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14848-20969-b63acb7dafa1eea311309da4a351492ab3bac7a2/.minikube/profiles/addons-20220906144437-22187/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006417896s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.12s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.66s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220906152522-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220906152522-22187
--- SKIP: TestNetworkPlugins/group/flannel (0.66s)

TestNetworkPlugins/group/custom-flannel (0.55s)
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220906152522-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220906152522-22187
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.55s)

TestStartStop/group/disable-driver-mounts (0.44s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220906155820-22187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220906155820-22187
--- SKIP: TestStartStop/group/disable-driver-mounts (0.44s)